Test Report: Docker_Linux_containerd_arm64 21975

bf5d9cb38ae1a2b3e4a9e22e363e3b0c86085c7c:2025-11-24:42481

Tests failed (4/333)

Order  Failed test                                                  Duration (s)
301    TestStartStop/group/old-k8s-version/serial/DeployApp         12.85
314    TestStartStop/group/no-preload/serial/DeployApp              12.97
315    TestStartStop/group/embed-certs/serial/DeployApp             14.67
346    TestStartStop/group/default-k8s-diff-port/serial/DeployApp   16.61
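
The detailed log below shows what this looks like for the old-k8s-version profile: the busybox pod from testdata/busybox.yaml is created and becomes Ready within about 8 seconds, but the check at start_stop_delete_test.go:194 then finds that 'ulimit -n' inside the pod is 1024 rather than the expected 1048576. A minimal sketch of that kind of check (hypothetical function and variable names, not the actual minikube test code):

// Illustrative sketch only: it reruns the failing command from the log,
// kubectl --context <ctx> exec busybox -- /bin/sh -c "ulimit -n",
// and compares the result with the expected open-file limit.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkOpenFileLimit runs 'ulimit -n' inside the busybox pod of the given
// kubectl context and returns an error if it differs from want.
func checkOpenFileLimit(kubeContext, want string) error {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		return fmt.Errorf("kubectl exec failed: %w", err)
	}
	if got := strings.TrimSpace(string(out)); got != want {
		// The condition reported in the log below:
		// 'ulimit -n' returned 1024, expected 1048576
		return fmt.Errorf("'ulimit -n' returned %s, expected %s", got, want)
	}
	return nil
}

func main() {
	if err := checkOpenFileLimit("old-k8s-version-098965", "1048576"); err != nil {
		fmt.Println(err)
	}
}

The other three DeployApp failures in the table share the same test path, so they presumably hit the same assertion.
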
TestStartStop/group/old-k8s-version/serial/DeployApp (12.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-098965 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b377806c-ae20-44d2-9d0f-07b097026328] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b377806c-ae20-44d2-9d0f-07b097026328] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.003571415s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-098965 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-098965
helpers_test.go:243: (dbg) docker inspect old-k8s-version-098965:

-- stdout --
	[
	    {
	        "Id": "51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022",
	        "Created": "2025-11-24T03:37:06.167962609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 457210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:37:06.24041942Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022/hostname",
	        "HostsPath": "/var/lib/docker/containers/51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022/hosts",
	        "LogPath": "/var/lib/docker/containers/51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022/51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022-json.log",
	        "Name": "/old-k8s-version-098965",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-098965:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-098965",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022",
	                "LowerDir": "/var/lib/docker/overlay2/8effb39b7e48dc2e06628c564f9eb8d7a6134b67b474f4243a9f92d81eed72e6-init/diff:/var/lib/docker/overlay2/11b197f530f0d571f61892814d8d4c774f7d3e5a97abdd8c5aa182cc99b2d856/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8effb39b7e48dc2e06628c564f9eb8d7a6134b67b474f4243a9f92d81eed72e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8effb39b7e48dc2e06628c564f9eb8d7a6134b67b474f4243a9f92d81eed72e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8effb39b7e48dc2e06628c564f9eb8d7a6134b67b474f4243a9f92d81eed72e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-098965",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-098965/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-098965",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-098965",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-098965",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1fdc3bd4111da77a7219abec40237713d3aafb5294361ea9ac940f031b5e9874",
	            "SandboxKey": "/var/run/docker/netns/1fdc3bd4111d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-098965": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:8b:8f:f7:48:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a787be3020cdc92e0572d92f4bf90ce3f3c7948fc2d2deef82cd4a5f099c319a",
	                    "EndpointID": "0c39f7b7035a14f48a48394a60897ac0eb2db5edb711c6ca54097ce4804ab54d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-098965",
	                        "51b62bc50b58"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-098965 -n old-k8s-version-098965
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-098965 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-098965 logs -n 25: (1.215717345s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-842431 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo containerd config dump                                                                                                                                                                                                        │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo crio config                                                                                                                                                                                                                   │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ delete  │ -p cilium-842431                                                                                                                                                                                                                                    │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:35 UTC │
	│ start   │ -p force-systemd-env-574539 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:35 UTC │
	│ delete  │ -p kubernetes-upgrade-850960                                                                                                                                                                                                                        │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-expiration-846384 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ force-systemd-env-574539 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p force-systemd-env-574539                                                                                                                                                                                                                         │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-options-216763 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ cert-options-216763 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ -p cert-options-216763 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p cert-options-216763                                                                                                                                                                                                                              │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:38 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:36:59
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:36:59.219087  456828 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:36:59.219245  456828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:36:59.219258  456828 out.go:374] Setting ErrFile to fd 2...
	I1124 03:36:59.219263  456828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:36:59.219545  456828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:36:59.220001  456828 out.go:368] Setting JSON to false
	I1124 03:36:59.221277  456828 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8348,"bootTime":1763947072,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:36:59.221357  456828 start.go:143] virtualization:  
	I1124 03:36:59.227637  456828 out.go:179] * [old-k8s-version-098965] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:36:59.231151  456828 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:36:59.231327  456828 notify.go:221] Checking for updates...
	I1124 03:36:59.238122  456828 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:36:59.241423  456828 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:36:59.244671  456828 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:36:59.247794  456828 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:36:59.250835  456828 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:36:59.254529  456828 config.go:182] Loaded profile config "cert-expiration-846384": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:36:59.254702  456828 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:36:59.294116  456828 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:36:59.294240  456828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:36:59.357118  456828 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:36:59.346945612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:36:59.357230  456828 docker.go:319] overlay module found
	I1124 03:36:59.360704  456828 out.go:179] * Using the docker driver based on user configuration
	I1124 03:36:59.363700  456828 start.go:309] selected driver: docker
	I1124 03:36:59.363727  456828 start.go:927] validating driver "docker" against <nil>
	I1124 03:36:59.363759  456828 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:36:59.364561  456828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:36:59.415697  456828 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:36:59.406291614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:36:59.415854  456828 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:36:59.416110  456828 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:36:59.419249  456828 out.go:179] * Using Docker driver with root privileges
	I1124 03:36:59.422257  456828 cni.go:84] Creating CNI manager for ""
	I1124 03:36:59.422344  456828 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:36:59.422359  456828 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:36:59.422448  456828 start.go:353] cluster config:
	{Name:old-k8s-version-098965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-098965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:36:59.425538  456828 out.go:179] * Starting "old-k8s-version-098965" primary control-plane node in "old-k8s-version-098965" cluster
	I1124 03:36:59.428289  456828 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:36:59.431323  456828 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:36:59.434113  456828 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 03:36:59.434165  456828 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1124 03:36:59.434176  456828 cache.go:65] Caching tarball of preloaded images
	I1124 03:36:59.434207  456828 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:36:59.434261  456828 preload.go:238] Found /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 03:36:59.434272  456828 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1124 03:36:59.434383  456828 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/config.json ...
	I1124 03:36:59.434401  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/config.json: {Name:mk69515acb07727840b36c87604cba4bd531db8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:36:59.454350  456828 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:36:59.454376  456828 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:36:59.454396  456828 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:36:59.454427  456828 start.go:360] acquireMachinesLock for old-k8s-version-098965: {Name:mkfaf6c0e20ffd0f03bcaf5e2568b90f1af41e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:36:59.454522  456828 start.go:364] duration metric: took 80.46µs to acquireMachinesLock for "old-k8s-version-098965"
	I1124 03:36:59.454546  456828 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-098965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-098965 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:36:59.454620  456828 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:36:59.460555  456828 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:36:59.460925  456828 start.go:159] libmachine.API.Create for "old-k8s-version-098965" (driver="docker")
	I1124 03:36:59.460965  456828 client.go:173] LocalClient.Create starting
	I1124 03:36:59.461101  456828 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem
	I1124 03:36:59.461190  456828 main.go:143] libmachine: Decoding PEM data...
	I1124 03:36:59.461213  456828 main.go:143] libmachine: Parsing certificate...
	I1124 03:36:59.461265  456828 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem
	I1124 03:36:59.461289  456828 main.go:143] libmachine: Decoding PEM data...
	I1124 03:36:59.461301  456828 main.go:143] libmachine: Parsing certificate...
	I1124 03:36:59.461680  456828 cli_runner.go:164] Run: docker network inspect old-k8s-version-098965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:36:59.477952  456828 cli_runner.go:211] docker network inspect old-k8s-version-098965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:36:59.478060  456828 network_create.go:284] running [docker network inspect old-k8s-version-098965] to gather additional debugging logs...
	I1124 03:36:59.478084  456828 cli_runner.go:164] Run: docker network inspect old-k8s-version-098965
	W1124 03:36:59.501264  456828 cli_runner.go:211] docker network inspect old-k8s-version-098965 returned with exit code 1
	I1124 03:36:59.501311  456828 network_create.go:287] error running [docker network inspect old-k8s-version-098965]: docker network inspect old-k8s-version-098965: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-098965 not found
	I1124 03:36:59.501326  456828 network_create.go:289] output of [docker network inspect old-k8s-version-098965]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-098965 not found
	
	** /stderr **
	I1124 03:36:59.501444  456828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:36:59.520261  456828 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-752aaa40bb3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:00:20:e4:71:15} reservation:<nil>}
	I1124 03:36:59.520804  456828 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbb0dee281db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:ff:07:3e:91:0f} reservation:<nil>}
	I1124 03:36:59.521086  456828 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d95ffec60547 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:b5:f2:ed:07:1e} reservation:<nil>}
	I1124 03:36:59.521451  456828 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1b3e5c8c3c27 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:8b:7b:bd:23:4e} reservation:<nil>}
	I1124 03:36:59.521977  456828 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3e7d0}
	I1124 03:36:59.522000  456828 network_create.go:124] attempt to create docker network old-k8s-version-098965 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 03:36:59.522073  456828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-098965 old-k8s-version-098965
	I1124 03:36:59.585194  456828 network_create.go:108] docker network old-k8s-version-098965 192.168.85.0/24 created
	I1124 03:36:59.585224  456828 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-098965" container
	I1124 03:36:59.585319  456828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:36:59.601540  456828 cli_runner.go:164] Run: docker volume create old-k8s-version-098965 --label name.minikube.sigs.k8s.io=old-k8s-version-098965 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:36:59.619479  456828 oci.go:103] Successfully created a docker volume old-k8s-version-098965
	I1124 03:36:59.619575  456828 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-098965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-098965 --entrypoint /usr/bin/test -v old-k8s-version-098965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:37:00.531368  456828 oci.go:107] Successfully prepared a docker volume old-k8s-version-098965
	I1124 03:37:00.531432  456828 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 03:37:00.531457  456828 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:37:00.531528  456828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-098965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:37:06.094759  456828 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-098965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.56319468s)
	I1124 03:37:06.094794  456828 kic.go:203] duration metric: took 5.563348412s to extract preloaded images to volume ...
	W1124 03:37:06.094942  456828 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 03:37:06.095054  456828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:37:06.151713  456828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-098965 --name old-k8s-version-098965 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-098965 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-098965 --network old-k8s-version-098965 --ip 192.168.85.2 --volume old-k8s-version-098965:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:37:06.468910  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Running}}
	I1124 03:37:06.491115  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:06.525628  456828 cli_runner.go:164] Run: docker exec old-k8s-version-098965 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:37:06.579576  456828 oci.go:144] the created container "old-k8s-version-098965" has a running status.
	I1124 03:37:06.579609  456828 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa...
	I1124 03:37:06.729919  456828 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:37:06.749775  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:06.775133  456828 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:37:06.775157  456828 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-098965 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:37:06.826229  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:06.849606  456828 machine.go:94] provisionDockerMachine start ...
	I1124 03:37:06.849724  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:06.876665  456828 main.go:143] libmachine: Using SSH client type: native
	I1124 03:37:06.877012  456828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1124 03:37:06.877028  456828 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:37:06.877699  456828 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43434->127.0.0.1:33418: read: connection reset by peer
	I1124 03:37:10.029639  456828 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098965
	
	I1124 03:37:10.029683  456828 ubuntu.go:182] provisioning hostname "old-k8s-version-098965"
	I1124 03:37:10.029771  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:10.053348  456828 main.go:143] libmachine: Using SSH client type: native
	I1124 03:37:10.053702  456828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1124 03:37:10.053721  456828 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098965 && echo "old-k8s-version-098965" | sudo tee /etc/hostname
	I1124 03:37:10.214587  456828 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098965
	
	I1124 03:37:10.214746  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:10.234062  456828 main.go:143] libmachine: Using SSH client type: native
	I1124 03:37:10.234384  456828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1124 03:37:10.234408  456828 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098965' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098965/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098965' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:37:10.384740  456828 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:37:10.384768  456828 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-255205/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-255205/.minikube}
	I1124 03:37:10.384789  456828 ubuntu.go:190] setting up certificates
	I1124 03:37:10.384814  456828 provision.go:84] configureAuth start
	I1124 03:37:10.384887  456828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098965
	I1124 03:37:10.402644  456828 provision.go:143] copyHostCerts
	I1124 03:37:10.402723  456828 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem, removing ...
	I1124 03:37:10.402738  456828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem
	I1124 03:37:10.402815  456828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem (1078 bytes)
	I1124 03:37:10.402908  456828 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem, removing ...
	I1124 03:37:10.402916  456828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem
	I1124 03:37:10.402943  456828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem (1123 bytes)
	I1124 03:37:10.403038  456828 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem, removing ...
	I1124 03:37:10.403049  456828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem
	I1124 03:37:10.403076  456828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem (1675 bytes)
	I1124 03:37:10.403135  456828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098965 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-098965]
	I1124 03:37:10.629214  456828 provision.go:177] copyRemoteCerts
	I1124 03:37:10.629287  456828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:37:10.629356  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:10.650240  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:10.757128  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:37:10.776341  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 03:37:10.795643  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:37:10.814914  456828 provision.go:87] duration metric: took 430.069497ms to configureAuth
	I1124 03:37:10.814945  456828 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:37:10.815151  456828 config.go:182] Loaded profile config "old-k8s-version-098965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:37:10.815164  456828 machine.go:97] duration metric: took 3.965526971s to provisionDockerMachine
	I1124 03:37:10.815172  456828 client.go:176] duration metric: took 11.35420095s to LocalClient.Create
	I1124 03:37:10.815193  456828 start.go:167] duration metric: took 11.354269562s to libmachine.API.Create "old-k8s-version-098965"
	I1124 03:37:10.815206  456828 start.go:293] postStartSetup for "old-k8s-version-098965" (driver="docker")
	I1124 03:37:10.815216  456828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:37:10.815268  456828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:37:10.815313  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:10.835952  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:10.940940  456828 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:37:10.944392  456828 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:37:10.944420  456828 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:37:10.944432  456828 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/addons for local assets ...
	I1124 03:37:10.944531  456828 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/files for local assets ...
	I1124 03:37:10.944626  456828 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem -> 2570692.pem in /etc/ssl/certs
	I1124 03:37:10.944739  456828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:37:10.953432  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:37:10.971988  456828 start.go:296] duration metric: took 156.749021ms for postStartSetup
	I1124 03:37:10.972390  456828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098965
	I1124 03:37:10.990273  456828 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/config.json ...
	I1124 03:37:10.990577  456828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:37:10.990655  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:11.011483  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:11.114126  456828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:37:11.120113  456828 start.go:128] duration metric: took 11.665472167s to createHost
	I1124 03:37:11.120148  456828 start.go:83] releasing machines lock for "old-k8s-version-098965", held for 11.665617423s
	I1124 03:37:11.120263  456828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098965
	I1124 03:37:11.139454  456828 ssh_runner.go:195] Run: cat /version.json
	I1124 03:37:11.139475  456828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:37:11.139509  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:11.139546  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:11.159657  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:11.178004  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:11.355366  456828 ssh_runner.go:195] Run: systemctl --version
	I1124 03:37:11.362316  456828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:37:11.366848  456828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:37:11.366938  456828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:37:11.395828  456828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 03:37:11.395905  456828 start.go:496] detecting cgroup driver to use...
	I1124 03:37:11.395958  456828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 03:37:11.396051  456828 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:37:11.412427  456828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:37:11.425664  456828 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:37:11.425739  456828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:37:11.443137  456828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:37:11.466719  456828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:37:11.592922  456828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:37:11.733496  456828 docker.go:234] disabling docker service ...
	I1124 03:37:11.733625  456828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:37:11.756653  456828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:37:11.773475  456828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:37:11.921229  456828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:37:12.062701  456828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:37:12.076946  456828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:37:12.092118  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 03:37:12.101736  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:37:12.111290  456828 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 03:37:12.111365  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 03:37:12.120980  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:37:12.130335  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:37:12.139831  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:37:12.149028  456828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:37:12.158976  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:37:12.168289  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:37:12.179157  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:37:12.189127  456828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:37:12.196909  456828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:37:12.204634  456828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:37:12.341578  456828 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:37:12.476923  456828 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:37:12.477008  456828 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:37:12.481370  456828 start.go:564] Will wait 60s for crictl version
	I1124 03:37:12.481445  456828 ssh_runner.go:195] Run: which crictl
	I1124 03:37:12.485391  456828 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:37:12.516194  456828 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:37:12.516279  456828 ssh_runner.go:195] Run: containerd --version
	I1124 03:37:12.539070  456828 ssh_runner.go:195] Run: containerd --version
	I1124 03:37:12.568276  456828 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 03:37:12.571173  456828 cli_runner.go:164] Run: docker network inspect old-k8s-version-098965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:37:12.589022  456828 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 03:37:12.593373  456828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:37:12.603275  456828 kubeadm.go:884] updating cluster {Name:old-k8s-version-098965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-098965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:37:12.603400  456828 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 03:37:12.603468  456828 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:37:12.632602  456828 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:37:12.632625  456828 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:37:12.632687  456828 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:37:12.658408  456828 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:37:12.658429  456828 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:37:12.658437  456828 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1124 03:37:12.658540  456828 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-098965 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-098965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:37:12.658617  456828 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:37:12.684758  456828 cni.go:84] Creating CNI manager for ""
	I1124 03:37:12.684785  456828 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:37:12.684799  456828 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:37:12.684823  456828 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098965 NodeName:old-k8s-version-098965 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:37:12.684947  456828 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-098965"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:37:12.685018  456828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 03:37:12.693523  456828 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:37:12.693645  456828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:37:12.702042  456828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1124 03:37:12.715991  456828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:37:12.729770  456828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1124 03:37:12.743578  456828 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:37:12.747534  456828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:37:12.757715  456828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:37:12.884405  456828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:37:12.905132  456828 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965 for IP: 192.168.85.2
	I1124 03:37:12.905158  456828 certs.go:195] generating shared ca certs ...
	I1124 03:37:12.905175  456828 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:12.905388  456828 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:37:12.905463  456828 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:37:12.905478  456828 certs.go:257] generating profile certs ...
	I1124 03:37:12.905558  456828 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.key
	I1124 03:37:12.905577  456828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt with IP's: []
	I1124 03:37:13.092952  456828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt ...
	I1124 03:37:13.092989  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: {Name:mkdd0fa6209ccf6aa2aa41557354bcbc75868f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.093227  456828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.key ...
	I1124 03:37:13.093245  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.key: {Name:mk9ee935a6f1a8dd6673b97d66ec46cca5ad1664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.093351  456828 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key.56338243
	I1124 03:37:13.093373  456828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt.56338243 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:37:13.449033  456828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt.56338243 ...
	I1124 03:37:13.449067  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt.56338243: {Name:mk65bad85814a0a12971d39286d0e5c451efbbb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.449251  456828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key.56338243 ...
	I1124 03:37:13.449269  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key.56338243: {Name:mk64b57d615101ac92823627ae52dbd8c44bfea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.449353  456828 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt.56338243 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt
	I1124 03:37:13.449436  456828 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key.56338243 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key
	I1124 03:37:13.449500  456828 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.key
	I1124 03:37:13.449520  456828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.crt with IP's: []
	I1124 03:37:13.614481  456828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.crt ...
	I1124 03:37:13.614513  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.crt: {Name:mk7045112c74be0d05a12bbf47e455d86596546e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.614698  456828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.key ...
	I1124 03:37:13.614719  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.key: {Name:mkbadb2ce2b4c7ecb9f7755942cb7ff8139714e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.614923  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:37:13.614972  456828 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:37:13.614987  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:37:13.615022  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:37:13.615052  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:37:13.615080  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:37:13.615129  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:37:13.615732  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:37:13.637058  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:37:13.656196  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:37:13.675513  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:37:13.694987  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 03:37:13.714921  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:37:13.733764  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:37:13.753359  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:37:13.772856  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:37:13.791271  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:37:13.810674  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:37:13.830260  456828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:37:13.845031  456828 ssh_runner.go:195] Run: openssl version
	I1124 03:37:13.854200  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:37:13.864090  456828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:37:13.868253  456828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:37:13.868333  456828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:37:13.910904  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
	I1124 03:37:13.920025  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:37:13.928734  456828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:37:13.932666  456828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:37:13.932765  456828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:37:13.979663  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:37:13.988918  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:37:13.998028  456828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:37:14.003766  456828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:37:14.003942  456828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:37:14.050814  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:37:14.059590  456828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:37:14.063378  456828 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:37:14.063446  456828 kubeadm.go:401] StartCluster: {Name:old-k8s-version-098965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-098965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:37:14.063519  456828 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:37:14.063584  456828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:37:14.095036  456828 cri.go:89] found id: ""
	I1124 03:37:14.095112  456828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:37:14.103077  456828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:37:14.111415  456828 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:37:14.111511  456828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:37:14.120533  456828 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:37:14.120557  456828 kubeadm.go:158] found existing configuration files:
	
	I1124 03:37:14.120636  456828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:37:14.129223  456828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:37:14.129299  456828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:37:14.137169  456828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:37:14.145264  456828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:37:14.145326  456828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:37:14.153440  456828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:37:14.161802  456828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:37:14.161868  456828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:37:14.169170  456828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:37:14.176729  456828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:37:14.176794  456828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:37:14.184164  456828 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:37:14.235215  456828 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 03:37:14.235279  456828 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:37:14.275786  456828 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:37:14.275863  456828 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:37:14.275904  456828 kubeadm.go:319] OS: Linux
	I1124 03:37:14.275954  456828 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:37:14.276007  456828 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:37:14.276058  456828 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:37:14.276111  456828 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:37:14.276161  456828 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:37:14.276213  456828 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:37:14.276262  456828 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:37:14.276314  456828 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:37:14.276364  456828 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:37:14.361031  456828 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:37:14.361200  456828 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:37:14.361333  456828 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 03:37:14.534299  456828 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:37:14.537587  456828 out.go:252]   - Generating certificates and keys ...
	I1124 03:37:14.537751  456828 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:37:14.537876  456828 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:37:15.136064  456828 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:37:15.790461  456828 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:37:16.745198  456828 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:37:17.101081  456828 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:37:17.816844  456828 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:37:17.817225  456828 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-098965] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:37:18.708622  456828 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:37:18.708941  456828 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-098965] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:37:19.626997  456828 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:37:20.013744  456828 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:37:21.332223  456828 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:37:21.333010  456828 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:37:21.538924  456828 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:37:21.950934  456828 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:37:23.178695  456828 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:37:23.307692  456828 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:37:23.308662  456828 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:37:23.312055  456828 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:37:23.317674  456828 out.go:252]   - Booting up control plane ...
	I1124 03:37:23.317788  456828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:37:23.317867  456828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:37:23.317934  456828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:37:23.338190  456828 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:37:23.339603  456828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:37:23.339662  456828 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:37:23.480314  456828 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 03:37:31.483609  456828 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.003694 seconds
	I1124 03:37:31.483744  456828 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:37:31.502417  456828 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:37:32.033208  456828 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:37:32.033430  456828 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-098965 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:37:32.546541  456828 kubeadm.go:319] [bootstrap-token] Using token: ycw9qc.7i65x4n1zr1z1k2d
	I1124 03:37:32.549515  456828 out.go:252]   - Configuring RBAC rules ...
	I1124 03:37:32.549646  456828 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:37:32.555568  456828 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:37:32.568444  456828 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:37:32.576351  456828 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:37:32.581974  456828 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:37:32.586465  456828 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:37:32.604043  456828 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:37:32.913255  456828 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:37:32.963682  456828 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:37:32.981293  456828 kubeadm.go:319] 
	I1124 03:37:32.981375  456828 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:37:32.981391  456828 kubeadm.go:319] 
	I1124 03:37:32.981470  456828 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:37:32.981490  456828 kubeadm.go:319] 
	I1124 03:37:32.981515  456828 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:37:32.983497  456828 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:37:32.983565  456828 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:37:32.983572  456828 kubeadm.go:319] 
	I1124 03:37:32.983653  456828 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:37:32.983663  456828 kubeadm.go:319] 
	I1124 03:37:32.983715  456828 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:37:32.983723  456828 kubeadm.go:319] 
	I1124 03:37:32.983775  456828 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:37:32.983853  456828 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:37:32.983929  456828 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:37:32.983938  456828 kubeadm.go:319] 
	I1124 03:37:32.984037  456828 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:37:32.984117  456828 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:37:32.984123  456828 kubeadm.go:319] 
	I1124 03:37:32.984216  456828 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ycw9qc.7i65x4n1zr1z1k2d \
	I1124 03:37:32.984336  456828 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 \
	I1124 03:37:32.984390  456828 kubeadm.go:319] 	--control-plane 
	I1124 03:37:32.984397  456828 kubeadm.go:319] 
	I1124 03:37:32.984704  456828 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:37:32.984716  456828 kubeadm.go:319] 
	I1124 03:37:32.984916  456828 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ycw9qc.7i65x4n1zr1z1k2d \
	I1124 03:37:32.985043  456828 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 
	I1124 03:37:32.991944  456828 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:37:32.992068  456828 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:37:32.992094  456828 cni.go:84] Creating CNI manager for ""
	I1124 03:37:32.992102  456828 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:37:32.995225  456828 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:37:32.998096  456828 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:37:33.004093  456828 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 03:37:33.004119  456828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:37:33.036441  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:37:34.155879  456828 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.119390679s)
	I1124 03:37:34.155921  456828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:37:34.156043  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-098965 minikube.k8s.io/updated_at=2025_11_24T03_37_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=old-k8s-version-098965 minikube.k8s.io/primary=true
	I1124 03:37:34.156059  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:34.370014  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:34.370081  456828 ops.go:34] apiserver oom_adj: -16
	I1124 03:37:34.870621  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:35.370425  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:35.870744  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:36.370591  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:36.870102  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:37.370755  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:37.870481  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:38.370729  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:38.870716  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:39.370861  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:39.870112  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:40.370985  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:40.870131  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:41.370224  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:41.870910  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:42.370129  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:42.870708  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:43.370299  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:43.870132  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:44.370373  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:44.870148  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:45.370873  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:45.870208  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:46.370930  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:46.870103  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:46.965988  456828 kubeadm.go:1114] duration metric: took 12.810009639s to wait for elevateKubeSystemPrivileges
	I1124 03:37:46.966014  456828 kubeadm.go:403] duration metric: took 32.902577839s to StartCluster
	I1124 03:37:46.966033  456828 settings.go:142] acquiring lock: {Name:mk06b563e5bc383cd64ed92ea3d8ac6aac195923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:46.966096  456828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:37:46.967091  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/kubeconfig: {Name:mk59b88a9b5c6c93f7412b3f64976d4efe64bdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:46.967316  456828 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:37:46.967431  456828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:37:46.967681  456828 config.go:182] Loaded profile config "old-k8s-version-098965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:37:46.967714  456828 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:37:46.967774  456828 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-098965"
	I1124 03:37:46.967788  456828 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-098965"
	I1124 03:37:46.967809  456828 host.go:66] Checking if "old-k8s-version-098965" exists ...
	I1124 03:37:46.968568  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:46.969265  456828 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-098965"
	I1124 03:37:46.969293  456828 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-098965"
	I1124 03:37:46.969620  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:46.976026  456828 out.go:179] * Verifying Kubernetes components...
	I1124 03:37:46.980332  456828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:37:47.005475  456828 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-098965"
	I1124 03:37:47.005521  456828 host.go:66] Checking if "old-k8s-version-098965" exists ...
	I1124 03:37:47.006016  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:47.021223  456828 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:37:47.025797  456828 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:37:47.025822  456828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:37:47.025899  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:47.043575  456828 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:37:47.043596  456828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:37:47.043662  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:47.067444  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:47.085937  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:47.279804  456828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:37:47.286358  456828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:37:47.448103  456828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:37:47.467412  456828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:37:48.289350  456828 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.002895345s)
	I1124 03:37:48.290385  456828 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-098965" to be "Ready" ...
	I1124 03:37:48.311715  456828 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.031826007s)
	I1124 03:37:48.311750  456828 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:37:48.801783  456828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.334276635s)
	I1124 03:37:48.805329  456828 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 03:37:48.808579  456828 addons.go:530] duration metric: took 1.840852722s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 03:37:48.816214  456828 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-098965" context rescaled to 1 replicas
	W1124 03:37:50.294251  456828 node_ready.go:57] node "old-k8s-version-098965" has "Ready":"False" status (will retry)
	W1124 03:37:52.793560  456828 node_ready.go:57] node "old-k8s-version-098965" has "Ready":"False" status (will retry)
	W1124 03:37:54.793903  456828 node_ready.go:57] node "old-k8s-version-098965" has "Ready":"False" status (will retry)
	W1124 03:37:56.794298  456828 node_ready.go:57] node "old-k8s-version-098965" has "Ready":"False" status (will retry)
	W1124 03:37:59.293536  456828 node_ready.go:57] node "old-k8s-version-098965" has "Ready":"False" status (will retry)
	I1124 03:38:00.344278  456828 node_ready.go:49] node "old-k8s-version-098965" is "Ready"
	I1124 03:38:00.344383  456828 node_ready.go:38] duration metric: took 12.053923317s for node "old-k8s-version-098965" to be "Ready" ...
	I1124 03:38:00.344417  456828 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:38:00.344536  456828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:38:00.402952  456828 api_server.go:72] duration metric: took 13.435606359s to wait for apiserver process to appear ...
	I1124 03:38:00.402979  456828 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:38:00.403000  456828 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:38:00.414315  456828 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:38:00.416457  456828 api_server.go:141] control plane version: v1.28.0
	I1124 03:38:00.416601  456828 api_server.go:131] duration metric: took 13.613451ms to wait for apiserver health ...
	I1124 03:38:00.416631  456828 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:38:00.425631  456828 system_pods.go:59] 8 kube-system pods found
	I1124 03:38:00.425724  456828 system_pods.go:61] "coredns-5dd5756b68-2kmf2" [9c6642fb-17b7-4199-b927-eb63b9a58260] Pending
	I1124 03:38:00.425749  456828 system_pods.go:61] "etcd-old-k8s-version-098965" [994c486f-9839-4407-bc6d-d7c52c9dcfe7] Running
	I1124 03:38:00.425783  456828 system_pods.go:61] "kindnet-mctv9" [0f0d91cd-7d64-482e-b33c-383b20f5bd79] Running
	I1124 03:38:00.425810  456828 system_pods.go:61] "kube-apiserver-old-k8s-version-098965" [777b36fe-0c46-4427-90b9-ef48ae1cc287] Running
	I1124 03:38:00.425830  456828 system_pods.go:61] "kube-controller-manager-old-k8s-version-098965" [3be22a1a-db9f-446f-9b0a-e61ce5482e12] Running
	I1124 03:38:00.425851  456828 system_pods.go:61] "kube-proxy-5t7nq" [6050bdb0-6390-48c7-863f-520ef6277ad8] Running
	I1124 03:38:00.425879  456828 system_pods.go:61] "kube-scheduler-old-k8s-version-098965" [ff509e4b-4fde-4ea0-8261-5f4463c5be01] Running
	I1124 03:38:00.425909  456828 system_pods.go:61] "storage-provisioner" [9ede1da5-704c-4aab-93e0-77ce93158129] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:38:00.425943  456828 system_pods.go:74] duration metric: took 9.290401ms to wait for pod list to return data ...
	I1124 03:38:00.425969  456828 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:38:00.429247  456828 default_sa.go:45] found service account: "default"
	I1124 03:38:00.429360  456828 default_sa.go:55] duration metric: took 3.356866ms for default service account to be created ...
	I1124 03:38:00.429393  456828 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:38:00.435199  456828 system_pods.go:86] 8 kube-system pods found
	I1124 03:38:00.435313  456828 system_pods.go:89] "coredns-5dd5756b68-2kmf2" [9c6642fb-17b7-4199-b927-eb63b9a58260] Pending
	I1124 03:38:00.435337  456828 system_pods.go:89] "etcd-old-k8s-version-098965" [994c486f-9839-4407-bc6d-d7c52c9dcfe7] Running
	I1124 03:38:00.435376  456828 system_pods.go:89] "kindnet-mctv9" [0f0d91cd-7d64-482e-b33c-383b20f5bd79] Running
	I1124 03:38:00.435403  456828 system_pods.go:89] "kube-apiserver-old-k8s-version-098965" [777b36fe-0c46-4427-90b9-ef48ae1cc287] Running
	I1124 03:38:00.435426  456828 system_pods.go:89] "kube-controller-manager-old-k8s-version-098965" [3be22a1a-db9f-446f-9b0a-e61ce5482e12] Running
	I1124 03:38:00.435460  456828 system_pods.go:89] "kube-proxy-5t7nq" [6050bdb0-6390-48c7-863f-520ef6277ad8] Running
	I1124 03:38:00.435482  456828 system_pods.go:89] "kube-scheduler-old-k8s-version-098965" [ff509e4b-4fde-4ea0-8261-5f4463c5be01] Running
	I1124 03:38:00.435514  456828 system_pods.go:89] "storage-provisioner" [9ede1da5-704c-4aab-93e0-77ce93158129] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:38:00.435576  456828 retry.go:31] will retry after 251.537949ms: missing components: kube-dns
	I1124 03:38:00.691897  456828 system_pods.go:86] 8 kube-system pods found
	I1124 03:38:00.691936  456828 system_pods.go:89] "coredns-5dd5756b68-2kmf2" [9c6642fb-17b7-4199-b927-eb63b9a58260] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:38:00.691943  456828 system_pods.go:89] "etcd-old-k8s-version-098965" [994c486f-9839-4407-bc6d-d7c52c9dcfe7] Running
	I1124 03:38:00.691949  456828 system_pods.go:89] "kindnet-mctv9" [0f0d91cd-7d64-482e-b33c-383b20f5bd79] Running
	I1124 03:38:00.691954  456828 system_pods.go:89] "kube-apiserver-old-k8s-version-098965" [777b36fe-0c46-4427-90b9-ef48ae1cc287] Running
	I1124 03:38:00.691959  456828 system_pods.go:89] "kube-controller-manager-old-k8s-version-098965" [3be22a1a-db9f-446f-9b0a-e61ce5482e12] Running
	I1124 03:38:00.691968  456828 system_pods.go:89] "kube-proxy-5t7nq" [6050bdb0-6390-48c7-863f-520ef6277ad8] Running
	I1124 03:38:00.691976  456828 system_pods.go:89] "kube-scheduler-old-k8s-version-098965" [ff509e4b-4fde-4ea0-8261-5f4463c5be01] Running
	I1124 03:38:00.691981  456828 system_pods.go:89] "storage-provisioner" [9ede1da5-704c-4aab-93e0-77ce93158129] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:38:00.691999  456828 retry.go:31] will retry after 269.359214ms: missing components: kube-dns
	I1124 03:38:00.970909  456828 system_pods.go:86] 8 kube-system pods found
	I1124 03:38:00.970944  456828 system_pods.go:89] "coredns-5dd5756b68-2kmf2" [9c6642fb-17b7-4199-b927-eb63b9a58260] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:38:00.970951  456828 system_pods.go:89] "etcd-old-k8s-version-098965" [994c486f-9839-4407-bc6d-d7c52c9dcfe7] Running
	I1124 03:38:00.970957  456828 system_pods.go:89] "kindnet-mctv9" [0f0d91cd-7d64-482e-b33c-383b20f5bd79] Running
	I1124 03:38:00.970961  456828 system_pods.go:89] "kube-apiserver-old-k8s-version-098965" [777b36fe-0c46-4427-90b9-ef48ae1cc287] Running
	I1124 03:38:00.970966  456828 system_pods.go:89] "kube-controller-manager-old-k8s-version-098965" [3be22a1a-db9f-446f-9b0a-e61ce5482e12] Running
	I1124 03:38:00.970969  456828 system_pods.go:89] "kube-proxy-5t7nq" [6050bdb0-6390-48c7-863f-520ef6277ad8] Running
	I1124 03:38:00.970973  456828 system_pods.go:89] "kube-scheduler-old-k8s-version-098965" [ff509e4b-4fde-4ea0-8261-5f4463c5be01] Running
	I1124 03:38:00.970978  456828 system_pods.go:89] "storage-provisioner" [9ede1da5-704c-4aab-93e0-77ce93158129] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:38:00.970996  456828 retry.go:31] will retry after 426.462867ms: missing components: kube-dns
	I1124 03:38:01.403286  456828 system_pods.go:86] 8 kube-system pods found
	I1124 03:38:01.403315  456828 system_pods.go:89] "coredns-5dd5756b68-2kmf2" [9c6642fb-17b7-4199-b927-eb63b9a58260] Running
	I1124 03:38:01.403322  456828 system_pods.go:89] "etcd-old-k8s-version-098965" [994c486f-9839-4407-bc6d-d7c52c9dcfe7] Running
	I1124 03:38:01.403330  456828 system_pods.go:89] "kindnet-mctv9" [0f0d91cd-7d64-482e-b33c-383b20f5bd79] Running
	I1124 03:38:01.403335  456828 system_pods.go:89] "kube-apiserver-old-k8s-version-098965" [777b36fe-0c46-4427-90b9-ef48ae1cc287] Running
	I1124 03:38:01.403341  456828 system_pods.go:89] "kube-controller-manager-old-k8s-version-098965" [3be22a1a-db9f-446f-9b0a-e61ce5482e12] Running
	I1124 03:38:01.403345  456828 system_pods.go:89] "kube-proxy-5t7nq" [6050bdb0-6390-48c7-863f-520ef6277ad8] Running
	I1124 03:38:01.403349  456828 system_pods.go:89] "kube-scheduler-old-k8s-version-098965" [ff509e4b-4fde-4ea0-8261-5f4463c5be01] Running
	I1124 03:38:01.403353  456828 system_pods.go:89] "storage-provisioner" [9ede1da5-704c-4aab-93e0-77ce93158129] Running
	I1124 03:38:01.403362  456828 system_pods.go:126] duration metric: took 973.897592ms to wait for k8s-apps to be running ...
	I1124 03:38:01.403373  456828 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:38:01.403427  456828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:38:01.421738  456828 system_svc.go:56] duration metric: took 18.35448ms WaitForService to wait for kubelet
	I1124 03:38:01.421765  456828 kubeadm.go:587] duration metric: took 14.454425317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:38:01.421786  456828 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:38:01.425010  456828 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:38:01.425044  456828 node_conditions.go:123] node cpu capacity is 2
	I1124 03:38:01.425059  456828 node_conditions.go:105] duration metric: took 3.267233ms to run NodePressure ...
	I1124 03:38:01.425099  456828 start.go:242] waiting for startup goroutines ...
	I1124 03:38:01.425108  456828 start.go:247] waiting for cluster config update ...
	I1124 03:38:01.425124  456828 start.go:256] writing updated cluster config ...
	I1124 03:38:01.425448  456828 ssh_runner.go:195] Run: rm -f paused
	I1124 03:38:01.429212  456828 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:38:01.435249  456828 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-2kmf2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.447218  456828 pod_ready.go:94] pod "coredns-5dd5756b68-2kmf2" is "Ready"
	I1124 03:38:01.447254  456828 pod_ready.go:86] duration metric: took 11.97007ms for pod "coredns-5dd5756b68-2kmf2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.452465  456828 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.459099  456828 pod_ready.go:94] pod "etcd-old-k8s-version-098965" is "Ready"
	I1124 03:38:01.459128  456828 pod_ready.go:86] duration metric: took 6.576599ms for pod "etcd-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.471032  456828 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.476662  456828 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-098965" is "Ready"
	I1124 03:38:01.476688  456828 pod_ready.go:86] duration metric: took 5.56861ms for pod "kube-apiserver-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.480096  456828 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.833649  456828 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-098965" is "Ready"
	I1124 03:38:01.833715  456828 pod_ready.go:86] duration metric: took 353.588012ms for pod "kube-controller-manager-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:02.035335  456828 pod_ready.go:83] waiting for pod "kube-proxy-5t7nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:02.433941  456828 pod_ready.go:94] pod "kube-proxy-5t7nq" is "Ready"
	I1124 03:38:02.433973  456828 pod_ready.go:86] duration metric: took 398.560828ms for pod "kube-proxy-5t7nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:02.633735  456828 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:03.033530  456828 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-098965" is "Ready"
	I1124 03:38:03.033561  456828 pod_ready.go:86] duration metric: took 399.801466ms for pod "kube-scheduler-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:03.033575  456828 pod_ready.go:40] duration metric: took 1.604321281s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:38:03.103182  456828 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1124 03:38:03.106579  456828 out.go:203] 
	W1124 03:38:03.109581  456828 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:38:03.112629  456828 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:38:03.116685  456828 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-098965" cluster and "default" namespace by default
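
The start log above ends with the cluster reported healthy: the node reached "Ready" after about 12s, all kube-system pods were Running, and the default-storageclass and storage-provisioner addons were enabled (the kubectl 1.33.2 vs cluster 1.28.0 skew warning is informational). As a minimal sketch, assuming the same kubeconfig context is still available, the readiness checks the harness performed over ssh_runner can be reproduced with plain kubectl:

	kubectl --context old-k8s-version-098965 wait node/old-k8s-version-098965 --for=condition=Ready --timeout=6m
	kubectl --context old-k8s-version-098965 -n kube-system wait pod --all --for=condition=Ready --timeout=4m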
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	2473c87591ead       1611cd07b61d5       6 seconds ago       Running             busybox                   0                   0780548608168       busybox                                          default
	32ab776c7affb       ba04bb24b9575       11 seconds ago      Running             storage-provisioner       0                   ddcd48a171630       storage-provisioner                              kube-system
	28a52e8d1e9e4       97e04611ad434       11 seconds ago      Running             coredns                   0                   aa01d3a3f7cba       coredns-5dd5756b68-2kmf2                         kube-system
	37f20e76ffbc2       b1a8c6f707935       23 seconds ago      Running             kindnet-cni               0                   2a6bd814ac01e       kindnet-mctv9                                    kube-system
	4baa8c107b38c       940f54a5bcae9       25 seconds ago      Running             kube-proxy                0                   b85e6b6d514cc       kube-proxy-5t7nq                                 kube-system
	8fb25b361e023       9cdd6470f48c8       47 seconds ago      Running             etcd                      0                   b669262c23763       etcd-old-k8s-version-098965                      kube-system
	666ad3b5bbcc5       00543d2fe5d71       47 seconds ago      Running             kube-apiserver            0                   9edcf3c3e4d9e       kube-apiserver-old-k8s-version-098965            kube-system
	95905c97af2e4       762dce4090c5f       47 seconds ago      Running             kube-scheduler            0                   d6f0d280dee01       kube-scheduler-old-k8s-version-098965            kube-system
	94d7bde87dab5       46cc66ccc7c19       47 seconds ago      Running             kube-controller-manager   0                   8eb2c9f965876       kube-controller-manager-old-k8s-version-098965   kube-system
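	
	This listing is taken from the container runtime via CRI. As a hedged sketch, assuming shell access to the node and that crictl is present in the node image, roughly the same table can be regenerated with:
	
	minikube -p old-k8s-version-098965 ssh -- sudo crictl ps -a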
	
	
	==> containerd <==
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.893826021Z" level=info msg="Container 32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.901529017Z" level=info msg="StartContainer for \"28a52e8d1e9e4c99322bf7f4a542d09e22eed502ede9105bfd3867fff8b743ae\""
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.902763625Z" level=info msg="connecting to shim 28a52e8d1e9e4c99322bf7f4a542d09e22eed502ede9105bfd3867fff8b743ae" address="unix:///run/containerd/s/70d70892534976c42f017b6a57c07c5f882e60cfc509cf351b04e5c63883f9c6" protocol=ttrpc version=3
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.923930845Z" level=info msg="CreateContainer within sandbox \"ddcd48a171630d558701e23e8b84d43ca3b433b204586da5fd73071e2c73cf02\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388\""
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.927472048Z" level=info msg="StartContainer for \"32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388\""
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.929854567Z" level=info msg="connecting to shim 32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388" address="unix:///run/containerd/s/5ddd01c5f051ac256aede9694ac052a9c600e13f3e3f44d833556ac361f844c9" protocol=ttrpc version=3
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.986416771Z" level=info msg="StartContainer for \"28a52e8d1e9e4c99322bf7f4a542d09e22eed502ede9105bfd3867fff8b743ae\" returns successfully"
	Nov 24 03:38:01 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:01.032852607Z" level=info msg="StartContainer for \"32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388\" returns successfully"
	Nov 24 03:38:03 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:03.633430300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b377806c-ae20-44d2-9d0f-07b097026328,Namespace:default,Attempt:0,}"
	Nov 24 03:38:03 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:03.690035484Z" level=info msg="connecting to shim 07805486081686e75b51f404a8d192120c8e44f1df35435e82a18cd840b250a6" address="unix:///run/containerd/s/62c9570c9e36a3dfb4b0454e8ff44f8873d73aec0247dc7c06a4c63bdd606e84" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:38:03 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:03.757467931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b377806c-ae20-44d2-9d0f-07b097026328,Namespace:default,Attempt:0,} returns sandbox id \"07805486081686e75b51f404a8d192120c8e44f1df35435e82a18cd840b250a6\""
	Nov 24 03:38:03 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:03.759222049Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.797829117Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.799579082Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937183"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.802353497Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.809134257Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.810394309Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.051134434s"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.810432586Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.813949961Z" level=info msg="CreateContainer within sandbox \"07805486081686e75b51f404a8d192120c8e44f1df35435e82a18cd840b250a6\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.827500676Z" level=info msg="Container 2473c87591ead98e23e27a6582c8fc6bfb2afc235a7786ab166b053a67742968: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.839960013Z" level=info msg="CreateContainer within sandbox \"07805486081686e75b51f404a8d192120c8e44f1df35435e82a18cd840b250a6\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"2473c87591ead98e23e27a6582c8fc6bfb2afc235a7786ab166b053a67742968\""
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.841216955Z" level=info msg="StartContainer for \"2473c87591ead98e23e27a6582c8fc6bfb2afc235a7786ab166b053a67742968\""
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.842449240Z" level=info msg="connecting to shim 2473c87591ead98e23e27a6582c8fc6bfb2afc235a7786ab166b053a67742968" address="unix:///run/containerd/s/62c9570c9e36a3dfb4b0454e8ff44f8873d73aec0247dc7c06a4c63bdd606e84" protocol=ttrpc version=3
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.911309396Z" level=info msg="StartContainer for \"2473c87591ead98e23e27a6582c8fc6bfb2afc235a7786ab166b053a67742968\" returns successfully"
	Nov 24 03:38:11 old-k8s-version-098965 containerd[757]: E1124 03:38:11.465754     757 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [28a52e8d1e9e4c99322bf7f4a542d09e22eed502ede9105bfd3867fff8b743ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36019 - 46965 "HINFO IN 101273306430571101.3418018538030985896. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022963225s
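	
	Earlier in the start log, minikube rewrote the coredns ConfigMap to add a hosts block mapping host.minikube.internal to 192.168.85.1. A sketch of how to confirm the injected record against the same context:
	
	kubectl --context old-k8s-version-098965 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'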
	
	
	==> describe nodes <==
	Name:               old-k8s-version-098965
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-098965
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=old-k8s-version-098965
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_37_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:37:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-098965
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:38:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:38:03 +0000   Mon, 24 Nov 2025 03:37:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:38:03 +0000   Mon, 24 Nov 2025 03:37:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:38:03 +0000   Mon, 24 Nov 2025 03:37:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:38:03 +0000   Mon, 24 Nov 2025 03:38:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-098965
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                016e6bb7-0740-4efc-ad46-1814703763df
	  Boot ID:                    63a8a852-1462-44b1-9d6f-f77d26e8568f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-5dd5756b68-2kmf2                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     26s
	  kube-system                 etcd-old-k8s-version-098965                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         39s
	  kube-system                 kindnet-mctv9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-old-k8s-version-098965             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-098965    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-5t7nq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-old-k8s-version-098965             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node old-k8s-version-098965 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node old-k8s-version-098965 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x7 over 48s)  kubelet          Node old-k8s-version-098965 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  48s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s                kubelet          Node old-k8s-version-098965 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s                kubelet          Node old-k8s-version-098965 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s                kubelet          Node old-k8s-version-098965 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           27s                node-controller  Node old-k8s-version-098965 event: Registered Node old-k8s-version-098965 in Controller
	  Normal  NodeReady                12s                kubelet          Node old-k8s-version-098965 status is now: NodeReady
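	
	The node description above (allocatable 2 CPU and ~8Gi memory, pod CIDR 10.244.0.0/24, Ready as of 03:38:00) matches what the harness polled for. As a sketch, the same data can be pulled directly from the cluster:
	
	kubectl --context old-k8s-version-098965 describe node old-k8s-version-098965
	kubectl --context old-k8s-version-098965 get node old-k8s-version-098965 -o jsonpath='{.status.allocatable}'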
	
	
	==> dmesg <==
	[Nov24 02:27] overlayfs: idmapped layers are currently not supported
	[Nov24 02:28] overlayfs: idmapped layers are currently not supported
	[Nov24 02:30] overlayfs: idmapped layers are currently not supported
	[  +9.824160] overlayfs: idmapped layers are currently not supported
	[Nov24 02:31] overlayfs: idmapped layers are currently not supported
	[Nov24 02:32] overlayfs: idmapped layers are currently not supported
	[ +27.981383] overlayfs: idmapped layers are currently not supported
	[Nov24 02:33] overlayfs: idmapped layers are currently not supported
	[Nov24 02:34] overlayfs: idmapped layers are currently not supported
	[Nov24 02:35] overlayfs: idmapped layers are currently not supported
	[Nov24 02:36] overlayfs: idmapped layers are currently not supported
	[Nov24 02:37] overlayfs: idmapped layers are currently not supported
	[Nov24 02:38] overlayfs: idmapped layers are currently not supported
	[Nov24 02:39] overlayfs: idmapped layers are currently not supported
	[ +24.837346] overlayfs: idmapped layers are currently not supported
	[Nov24 02:40] overlayfs: idmapped layers are currently not supported
	[ +40.823948] overlayfs: idmapped layers are currently not supported
	[  +1.705989] overlayfs: idmapped layers are currently not supported
	[Nov24 02:42] overlayfs: idmapped layers are currently not supported
	[ +21.661904] overlayfs: idmapped layers are currently not supported
	[Nov24 02:44] overlayfs: idmapped layers are currently not supported
	[  +1.074777] overlayfs: idmapped layers are currently not supported
	[Nov24 02:46] overlayfs: idmapped layers are currently not supported
	[ +19.120392] overlayfs: idmapped layers are currently not supported
	[Nov24 02:48] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8fb25b361e0239913db0778bdfb64d93fee6d1a16be3fd7f4f316e46a892bbde] <==
	{"level":"info","ts":"2025-11-24T03:37:25.43693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-24T03:37:25.437104Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-24T03:37:25.441421Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T03:37:25.44164Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T03:37:25.441821Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T03:37:25.445077Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T03:37:25.445165Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T03:37:25.495636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-24T03:37:25.495889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-24T03:37:25.496002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-24T03:37:25.496143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-24T03:37:25.496408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T03:37:25.497335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-24T03:37:25.497479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T03:37:25.498979Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-098965 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T03:37:25.499256Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T03:37:25.50078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-24T03:37:25.500939Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T03:37:25.501272Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:37:25.503825Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T03:37:25.502905Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T03:37:25.504027Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T03:37:25.506444Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:37:25.506667Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:37:25.506735Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 03:38:12 up  2:20,  0 user,  load average: 1.98, 3.10, 2.74
	Linux old-k8s-version-098965 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [37f20e76ffbc24c2b929d70181ec4667f979dd10e9528ae0a376dca755a608bd] <==
	I1124 03:37:49.827895       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:37:49.828142       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:37:49.828290       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:37:49.828302       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:37:49.828312       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:37:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:37:50.033133       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:37:50.033241       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:37:50.033288       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:37:50.034681       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:37:50.324571       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:37:50.324658       1 metrics.go:72] Registering metrics
	I1124 03:37:50.324749       1 controller.go:711] "Syncing nftables rules"
	I1124 03:38:00.040225       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:38:00.040277       1 main.go:301] handling current node
	I1124 03:38:10.032819       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:38:10.033055       1 main.go:301] handling current node
	
	
	==> kube-apiserver [666ad3b5bbcc57cef3344095ab7c6a95424fcdae77e237214b172a62b87abb2e] <==
	I1124 03:37:29.817316       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 03:37:29.821018       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 03:37:29.821061       1 aggregator.go:166] initial CRD sync complete...
	I1124 03:37:29.821069       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 03:37:29.821233       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:37:29.821307       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:37:29.822681       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:37:29.854244       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:37:29.879343       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 03:37:29.891373       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1124 03:37:30.501665       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:37:30.515842       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:37:30.515870       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:37:31.168083       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:37:31.220692       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:37:31.327576       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:37:31.335539       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:37:31.336837       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 03:37:31.342035       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:37:31.795879       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 03:37:32.895003       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 03:37:32.910864       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:37:32.928122       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 03:37:45.687691       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 03:37:46.683285       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [94d7bde87dab52f8ec3b1763043f2afa14f31bf91ba4ddd110aa3c091eb1f236] <==
	I1124 03:37:45.830060       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1124 03:37:45.830629       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-098965"
	I1124 03:37:45.832023       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1124 03:37:45.830221       1 event.go:307] "Event occurred" object="old-k8s-version-098965" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-098965 event: Registered Node old-k8s-version-098965 in Controller"
	I1124 03:37:46.234151       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:37:46.277193       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:37:46.277382       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 03:37:46.504268       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xqjm9"
	I1124 03:37:46.531473       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2kmf2"
	I1124 03:37:46.546744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="854.357749ms"
	I1124 03:37:46.566310       1 event.go:307] "Event occurred" object="kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kube-system/kube-dns: endpoints \"kube-dns\" already exists"
	I1124 03:37:46.584884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="38.082715ms"
	I1124 03:37:46.585113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="149.391µs"
	I1124 03:37:46.696833       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5t7nq"
	I1124 03:37:46.703751       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mctv9"
	I1124 03:37:48.352432       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 03:37:48.387265       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xqjm9"
	I1124 03:37:48.403262       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.894657ms"
	I1124 03:37:48.414134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.824275ms"
	I1124 03:37:48.414238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.528µs"
	I1124 03:38:00.391250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="131.267µs"
	I1124 03:38:00.449093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.589µs"
	I1124 03:38:00.836415       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1124 03:38:01.292788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.437073ms"
	I1124 03:38:01.294027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.155904ms"
	
	
	==> kube-proxy [4baa8c107b38cc2761e31cd050e33ec89802d4aa44bd4f1d1d031950a9d835ec] <==
	I1124 03:37:47.752353       1 server_others.go:69] "Using iptables proxy"
	I1124 03:37:47.775066       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1124 03:37:47.844709       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:37:47.849188       1 server_others.go:152] "Using iptables Proxier"
	I1124 03:37:47.849234       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 03:37:47.849286       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 03:37:47.849319       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 03:37:47.849526       1 server.go:846] "Version info" version="v1.28.0"
	I1124 03:37:47.849543       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:37:47.851283       1 config.go:188] "Starting service config controller"
	I1124 03:37:47.851308       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 03:37:47.851328       1 config.go:97] "Starting endpoint slice config controller"
	I1124 03:37:47.851333       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 03:37:47.851909       1 config.go:315] "Starting node config controller"
	I1124 03:37:47.851919       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 03:37:47.952223       1 shared_informer.go:318] Caches are synced for node config
	I1124 03:37:47.952255       1 shared_informer.go:318] Caches are synced for service config
	I1124 03:37:47.952281       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [95905c97af2e4e393feeaef2edf3e1c7c5fc6dcb11cccf3554a17255c56bd15d] <==
	W1124 03:37:29.836095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 03:37:29.836123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 03:37:30.637685       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 03:37:30.637945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 03:37:30.642715       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 03:37:30.642753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 03:37:30.708776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 03:37:30.708817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1124 03:37:30.711532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 03:37:30.711569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 03:37:30.717417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 03:37:30.717460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 03:37:30.738383       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1124 03:37:30.738423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1124 03:37:30.770745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 03:37:30.770991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 03:37:30.836552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 03:37:30.836594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 03:37:30.842629       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1124 03:37:30.842859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1124 03:37:30.843777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 03:37:30.843981       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 03:37:30.921894       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 03:37:30.922102       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1124 03:37:33.702680       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 03:37:45 old-k8s-version-098965 kubelet[1540]: I1124 03:37:45.681266    1540 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.713561    1540 topology_manager.go:215] "Topology Admit Handler" podUID="6050bdb0-6390-48c7-863f-520ef6277ad8" podNamespace="kube-system" podName="kube-proxy-5t7nq"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.716233    1540 topology_manager.go:215] "Topology Admit Handler" podUID="0f0d91cd-7d64-482e-b33c-383b20f5bd79" podNamespace="kube-system" podName="kindnet-mctv9"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.767542    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0f0d91cd-7d64-482e-b33c-383b20f5bd79-cni-cfg\") pod \"kindnet-mctv9\" (UID: \"0f0d91cd-7d64-482e-b33c-383b20f5bd79\") " pod="kube-system/kindnet-mctv9"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.767756    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f0d91cd-7d64-482e-b33c-383b20f5bd79-xtables-lock\") pod \"kindnet-mctv9\" (UID: \"0f0d91cd-7d64-482e-b33c-383b20f5bd79\") " pod="kube-system/kindnet-mctv9"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.767863    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgdbr\" (UniqueName: \"kubernetes.io/projected/0f0d91cd-7d64-482e-b33c-383b20f5bd79-kube-api-access-tgdbr\") pod \"kindnet-mctv9\" (UID: \"0f0d91cd-7d64-482e-b33c-383b20f5bd79\") " pod="kube-system/kindnet-mctv9"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.767964    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6050bdb0-6390-48c7-863f-520ef6277ad8-xtables-lock\") pod \"kube-proxy-5t7nq\" (UID: \"6050bdb0-6390-48c7-863f-520ef6277ad8\") " pod="kube-system/kube-proxy-5t7nq"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.768057    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6050bdb0-6390-48c7-863f-520ef6277ad8-lib-modules\") pod \"kube-proxy-5t7nq\" (UID: \"6050bdb0-6390-48c7-863f-520ef6277ad8\") " pod="kube-system/kube-proxy-5t7nq"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.768153    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnmtw\" (UniqueName: \"kubernetes.io/projected/6050bdb0-6390-48c7-863f-520ef6277ad8-kube-api-access-dnmtw\") pod \"kube-proxy-5t7nq\" (UID: \"6050bdb0-6390-48c7-863f-520ef6277ad8\") " pod="kube-system/kube-proxy-5t7nq"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.768259    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6050bdb0-6390-48c7-863f-520ef6277ad8-kube-proxy\") pod \"kube-proxy-5t7nq\" (UID: \"6050bdb0-6390-48c7-863f-520ef6277ad8\") " pod="kube-system/kube-proxy-5t7nq"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.768359    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f0d91cd-7d64-482e-b33c-383b20f5bd79-lib-modules\") pod \"kindnet-mctv9\" (UID: \"0f0d91cd-7d64-482e-b33c-383b20f5bd79\") " pod="kube-system/kindnet-mctv9"
	Nov 24 03:37:50 old-k8s-version-098965 kubelet[1540]: I1124 03:37:50.218063    1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5t7nq" podStartSLOduration=4.2180200469999996 podCreationTimestamp="2025-11-24 03:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:37:48.208401476 +0000 UTC m=+15.367432854" watchObservedRunningTime="2025-11-24 03:37:50.218020047 +0000 UTC m=+17.377051399"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.190884    1540 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.379008    1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mctv9" podStartSLOduration=12.155727494 podCreationTimestamp="2025-11-24 03:37:46 +0000 UTC" firstStartedPulling="2025-11-24 03:37:47.330956178 +0000 UTC m=+14.489987539" lastFinishedPulling="2025-11-24 03:37:49.554182857 +0000 UTC m=+16.713214218" observedRunningTime="2025-11-24 03:37:50.219024146 +0000 UTC m=+17.378055507" watchObservedRunningTime="2025-11-24 03:38:00.378954173 +0000 UTC m=+27.537985543"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.379210    1540 topology_manager.go:215] "Topology Admit Handler" podUID="9ede1da5-704c-4aab-93e0-77ce93158129" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.394275    1540 topology_manager.go:215] "Topology Admit Handler" podUID="9c6642fb-17b7-4199-b927-eb63b9a58260" podNamespace="kube-system" podName="coredns-5dd5756b68-2kmf2"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.504386    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgg48\" (UniqueName: \"kubernetes.io/projected/9c6642fb-17b7-4199-b927-eb63b9a58260-kube-api-access-fgg48\") pod \"coredns-5dd5756b68-2kmf2\" (UID: \"9c6642fb-17b7-4199-b927-eb63b9a58260\") " pod="kube-system/coredns-5dd5756b68-2kmf2"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.504451    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6642fb-17b7-4199-b927-eb63b9a58260-config-volume\") pod \"coredns-5dd5756b68-2kmf2\" (UID: \"9c6642fb-17b7-4199-b927-eb63b9a58260\") " pod="kube-system/coredns-5dd5756b68-2kmf2"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.504532    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8snrh\" (UniqueName: \"kubernetes.io/projected/9ede1da5-704c-4aab-93e0-77ce93158129-kube-api-access-8snrh\") pod \"storage-provisioner\" (UID: \"9ede1da5-704c-4aab-93e0-77ce93158129\") " pod="kube-system/storage-provisioner"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.504567    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9ede1da5-704c-4aab-93e0-77ce93158129-tmp\") pod \"storage-provisioner\" (UID: \"9ede1da5-704c-4aab-93e0-77ce93158129\") " pod="kube-system/storage-provisioner"
	Nov 24 03:38:01 old-k8s-version-098965 kubelet[1540]: I1124 03:38:01.277737    1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.277693839 podCreationTimestamp="2025-11-24 03:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:38:01.256250253 +0000 UTC m=+28.415281605" watchObservedRunningTime="2025-11-24 03:38:01.277693839 +0000 UTC m=+28.436725192"
	Nov 24 03:38:03 old-k8s-version-098965 kubelet[1540]: I1124 03:38:03.329633    1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2kmf2" podStartSLOduration=17.329588381 podCreationTimestamp="2025-11-24 03:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:38:01.279992944 +0000 UTC m=+28.439024297" watchObservedRunningTime="2025-11-24 03:38:03.329588381 +0000 UTC m=+30.488619734"
	Nov 24 03:38:03 old-k8s-version-098965 kubelet[1540]: I1124 03:38:03.329845    1540 topology_manager.go:215] "Topology Admit Handler" podUID="b377806c-ae20-44d2-9d0f-07b097026328" podNamespace="default" podName="busybox"
	Nov 24 03:38:03 old-k8s-version-098965 kubelet[1540]: I1124 03:38:03.426801    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2qh\" (UniqueName: \"kubernetes.io/projected/b377806c-ae20-44d2-9d0f-07b097026328-kube-api-access-wn2qh\") pod \"busybox\" (UID: \"b377806c-ae20-44d2-9d0f-07b097026328\") " pod="default/busybox"
	Nov 24 03:38:06 old-k8s-version-098965 kubelet[1540]: I1124 03:38:06.274643    1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.222747006 podCreationTimestamp="2025-11-24 03:38:03 +0000 UTC" firstStartedPulling="2025-11-24 03:38:03.75886784 +0000 UTC m=+30.917899193" lastFinishedPulling="2025-11-24 03:38:05.810715943 +0000 UTC m=+32.969747296" observedRunningTime="2025-11-24 03:38:06.27449371 +0000 UTC m=+33.433525071" watchObservedRunningTime="2025-11-24 03:38:06.274595109 +0000 UTC m=+33.433626495"
	
	
	==> storage-provisioner [32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388] <==
	I1124 03:38:01.039603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:38:01.054106       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:38:01.054328       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 03:38:01.064918       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:38:01.065095       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-098965_e3e8caf0-85bd-4d0b-af08-80a33b7d616e!
	I1124 03:38:01.066102       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c304fee8-eb73-4695-8997-27ec70001b31", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-098965_e3e8caf0-85bd-4d0b-af08-80a33b7d616e became leader
	I1124 03:38:01.165252       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-098965_e3e8caf0-85bd-4d0b-af08-80a33b7d616e!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098965 -n old-k8s-version-098965
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-098965 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-098965
helpers_test.go:243: (dbg) docker inspect old-k8s-version-098965:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022",
	        "Created": "2025-11-24T03:37:06.167962609Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 457210,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:37:06.24041942Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022/hostname",
	        "HostsPath": "/var/lib/docker/containers/51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022/hosts",
	        "LogPath": "/var/lib/docker/containers/51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022/51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022-json.log",
	        "Name": "/old-k8s-version-098965",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-098965:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-098965",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "51b62bc50b581270fcb4bc2e1c574a9a6681d89c3887762aa06dd29ac0c65022",
	                "LowerDir": "/var/lib/docker/overlay2/8effb39b7e48dc2e06628c564f9eb8d7a6134b67b474f4243a9f92d81eed72e6-init/diff:/var/lib/docker/overlay2/11b197f530f0d571f61892814d8d4c774f7d3e5a97abdd8c5aa182cc99b2d856/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8effb39b7e48dc2e06628c564f9eb8d7a6134b67b474f4243a9f92d81eed72e6/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8effb39b7e48dc2e06628c564f9eb8d7a6134b67b474f4243a9f92d81eed72e6/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8effb39b7e48dc2e06628c564f9eb8d7a6134b67b474f4243a9f92d81eed72e6/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-098965",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-098965/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-098965",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-098965",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-098965",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1fdc3bd4111da77a7219abec40237713d3aafb5294361ea9ac940f031b5e9874",
	            "SandboxKey": "/var/run/docker/netns/1fdc3bd4111d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33418"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33419"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33422"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33420"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33421"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-098965": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:8b:8f:f7:48:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a787be3020cdc92e0572d92f4bf90ce3f3c7948fc2d2deef82cd4a5f099c319a",
	                    "EndpointID": "0c39f7b7035a14f48a48394a60897ac0eb2db5edb711c6ca54097ce4804ab54d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-098965",
	                        "51b62bc50b58"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-098965 -n old-k8s-version-098965
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-098965 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-098965 logs -n 25: (1.289253666s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-842431 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo containerd config dump                                                                                                                                                                                                        │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ ssh     │ -p cilium-842431 sudo crio config                                                                                                                                                                                                                   │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ delete  │ -p cilium-842431                                                                                                                                                                                                                                    │ cilium-842431             │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:35 UTC │
	│ start   │ -p force-systemd-env-574539 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:35 UTC │
	│ delete  │ -p kubernetes-upgrade-850960                                                                                                                                                                                                                        │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-expiration-846384 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ force-systemd-env-574539 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p force-systemd-env-574539                                                                                                                                                                                                                         │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-options-216763 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ cert-options-216763 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ -p cert-options-216763 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p cert-options-216763                                                                                                                                                                                                                              │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:38 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:36:59
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:36:59.219087  456828 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:36:59.219245  456828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:36:59.219258  456828 out.go:374] Setting ErrFile to fd 2...
	I1124 03:36:59.219263  456828 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:36:59.219545  456828 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:36:59.220001  456828 out.go:368] Setting JSON to false
	I1124 03:36:59.221277  456828 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8348,"bootTime":1763947072,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:36:59.221357  456828 start.go:143] virtualization:  
	I1124 03:36:59.227637  456828 out.go:179] * [old-k8s-version-098965] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:36:59.231151  456828 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:36:59.231327  456828 notify.go:221] Checking for updates...
	I1124 03:36:59.238122  456828 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:36:59.241423  456828 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:36:59.244671  456828 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:36:59.247794  456828 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:36:59.250835  456828 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:36:59.254529  456828 config.go:182] Loaded profile config "cert-expiration-846384": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:36:59.254702  456828 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:36:59.294116  456828 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:36:59.294240  456828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:36:59.357118  456828 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:36:59.346945612 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:36:59.357230  456828 docker.go:319] overlay module found
	I1124 03:36:59.360704  456828 out.go:179] * Using the docker driver based on user configuration
	I1124 03:36:59.363700  456828 start.go:309] selected driver: docker
	I1124 03:36:59.363727  456828 start.go:927] validating driver "docker" against <nil>
	I1124 03:36:59.363759  456828 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:36:59.364561  456828 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:36:59.415697  456828 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:36:59.406291614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:36:59.415854  456828 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:36:59.416110  456828 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:36:59.419249  456828 out.go:179] * Using Docker driver with root privileges
	I1124 03:36:59.422257  456828 cni.go:84] Creating CNI manager for ""
	I1124 03:36:59.422344  456828 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:36:59.422359  456828 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:36:59.422448  456828 start.go:353] cluster config:
	{Name:old-k8s-version-098965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-098965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:36:59.425538  456828 out.go:179] * Starting "old-k8s-version-098965" primary control-plane node in "old-k8s-version-098965" cluster
	I1124 03:36:59.428289  456828 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:36:59.431323  456828 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:36:59.434113  456828 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 03:36:59.434165  456828 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1124 03:36:59.434176  456828 cache.go:65] Caching tarball of preloaded images
	I1124 03:36:59.434207  456828 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:36:59.434261  456828 preload.go:238] Found /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 03:36:59.434272  456828 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1124 03:36:59.434383  456828 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/config.json ...
	I1124 03:36:59.434401  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/config.json: {Name:mk69515acb07727840b36c87604cba4bd531db8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:36:59.454350  456828 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:36:59.454376  456828 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:36:59.454396  456828 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:36:59.454427  456828 start.go:360] acquireMachinesLock for old-k8s-version-098965: {Name:mkfaf6c0e20ffd0f03bcaf5e2568b90f1af41e0c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:36:59.454522  456828 start.go:364] duration metric: took 80.46µs to acquireMachinesLock for "old-k8s-version-098965"
	I1124 03:36:59.454546  456828 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-098965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-098965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:36:59.454620  456828 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:36:59.460555  456828 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:36:59.460925  456828 start.go:159] libmachine.API.Create for "old-k8s-version-098965" (driver="docker")
	I1124 03:36:59.460965  456828 client.go:173] LocalClient.Create starting
	I1124 03:36:59.461101  456828 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem
	I1124 03:36:59.461190  456828 main.go:143] libmachine: Decoding PEM data...
	I1124 03:36:59.461213  456828 main.go:143] libmachine: Parsing certificate...
	I1124 03:36:59.461265  456828 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem
	I1124 03:36:59.461289  456828 main.go:143] libmachine: Decoding PEM data...
	I1124 03:36:59.461301  456828 main.go:143] libmachine: Parsing certificate...
	I1124 03:36:59.461680  456828 cli_runner.go:164] Run: docker network inspect old-k8s-version-098965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:36:59.477952  456828 cli_runner.go:211] docker network inspect old-k8s-version-098965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:36:59.478060  456828 network_create.go:284] running [docker network inspect old-k8s-version-098965] to gather additional debugging logs...
	I1124 03:36:59.478084  456828 cli_runner.go:164] Run: docker network inspect old-k8s-version-098965
	W1124 03:36:59.501264  456828 cli_runner.go:211] docker network inspect old-k8s-version-098965 returned with exit code 1
	I1124 03:36:59.501311  456828 network_create.go:287] error running [docker network inspect old-k8s-version-098965]: docker network inspect old-k8s-version-098965: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-098965 not found
	I1124 03:36:59.501326  456828 network_create.go:289] output of [docker network inspect old-k8s-version-098965]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-098965 not found
	
	** /stderr **
	I1124 03:36:59.501444  456828 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:36:59.520261  456828 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-752aaa40bb3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:00:20:e4:71:15} reservation:<nil>}
	I1124 03:36:59.520804  456828 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbb0dee281db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:ff:07:3e:91:0f} reservation:<nil>}
	I1124 03:36:59.521086  456828 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d95ffec60547 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:b5:f2:ed:07:1e} reservation:<nil>}
	I1124 03:36:59.521451  456828 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-1b3e5c8c3c27 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7e:8b:7b:bd:23:4e} reservation:<nil>}
	I1124 03:36:59.521977  456828 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a3e7d0}
	I1124 03:36:59.522000  456828 network_create.go:124] attempt to create docker network old-k8s-version-098965 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 03:36:59.522073  456828 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-098965 old-k8s-version-098965
	I1124 03:36:59.585194  456828 network_create.go:108] docker network old-k8s-version-098965 192.168.85.0/24 created
	I1124 03:36:59.585224  456828 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-098965" container
	I1124 03:36:59.585319  456828 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:36:59.601540  456828 cli_runner.go:164] Run: docker volume create old-k8s-version-098965 --label name.minikube.sigs.k8s.io=old-k8s-version-098965 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:36:59.619479  456828 oci.go:103] Successfully created a docker volume old-k8s-version-098965
	I1124 03:36:59.619575  456828 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-098965-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-098965 --entrypoint /usr/bin/test -v old-k8s-version-098965:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:37:00.531368  456828 oci.go:107] Successfully prepared a docker volume old-k8s-version-098965
	I1124 03:37:00.531432  456828 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 03:37:00.531457  456828 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:37:00.531528  456828 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-098965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:37:06.094759  456828 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-098965:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (5.56319468s)
	I1124 03:37:06.094794  456828 kic.go:203] duration metric: took 5.563348412s to extract preloaded images to volume ...
	W1124 03:37:06.094942  456828 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 03:37:06.095054  456828 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:37:06.151713  456828 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-098965 --name old-k8s-version-098965 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-098965 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-098965 --network old-k8s-version-098965 --ip 192.168.85.2 --volume old-k8s-version-098965:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:37:06.468910  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Running}}
	I1124 03:37:06.491115  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:06.525628  456828 cli_runner.go:164] Run: docker exec old-k8s-version-098965 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:37:06.579576  456828 oci.go:144] the created container "old-k8s-version-098965" has a running status.
	I1124 03:37:06.579609  456828 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa...
	I1124 03:37:06.729919  456828 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:37:06.749775  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:06.775133  456828 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:37:06.775157  456828 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-098965 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:37:06.826229  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:06.849606  456828 machine.go:94] provisionDockerMachine start ...
	I1124 03:37:06.849724  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:06.876665  456828 main.go:143] libmachine: Using SSH client type: native
	I1124 03:37:06.877012  456828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1124 03:37:06.877028  456828 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:37:06.877699  456828 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43434->127.0.0.1:33418: read: connection reset by peer
	I1124 03:37:10.029639  456828 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098965
	
	I1124 03:37:10.029683  456828 ubuntu.go:182] provisioning hostname "old-k8s-version-098965"
	I1124 03:37:10.029771  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:10.053348  456828 main.go:143] libmachine: Using SSH client type: native
	I1124 03:37:10.053702  456828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1124 03:37:10.053721  456828 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-098965 && echo "old-k8s-version-098965" | sudo tee /etc/hostname
	I1124 03:37:10.214587  456828 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-098965
	
	I1124 03:37:10.214746  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:10.234062  456828 main.go:143] libmachine: Using SSH client type: native
	I1124 03:37:10.234384  456828 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33418 <nil> <nil>}
	I1124 03:37:10.234408  456828 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-098965' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-098965/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-098965' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:37:10.384740  456828 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:37:10.384768  456828 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-255205/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-255205/.minikube}
	I1124 03:37:10.384789  456828 ubuntu.go:190] setting up certificates
	I1124 03:37:10.384814  456828 provision.go:84] configureAuth start
	I1124 03:37:10.384887  456828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098965
	I1124 03:37:10.402644  456828 provision.go:143] copyHostCerts
	I1124 03:37:10.402723  456828 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem, removing ...
	I1124 03:37:10.402738  456828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem
	I1124 03:37:10.402815  456828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem (1078 bytes)
	I1124 03:37:10.402908  456828 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem, removing ...
	I1124 03:37:10.402916  456828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem
	I1124 03:37:10.402943  456828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem (1123 bytes)
	I1124 03:37:10.403038  456828 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem, removing ...
	I1124 03:37:10.403049  456828 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem
	I1124 03:37:10.403076  456828 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem (1675 bytes)
	I1124 03:37:10.403135  456828 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-098965 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-098965]
	I1124 03:37:10.629214  456828 provision.go:177] copyRemoteCerts
	I1124 03:37:10.629287  456828 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:37:10.629356  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:10.650240  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:10.757128  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:37:10.776341  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 03:37:10.795643  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:37:10.814914  456828 provision.go:87] duration metric: took 430.069497ms to configureAuth
	I1124 03:37:10.814945  456828 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:37:10.815151  456828 config.go:182] Loaded profile config "old-k8s-version-098965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:37:10.815164  456828 machine.go:97] duration metric: took 3.965526971s to provisionDockerMachine
	I1124 03:37:10.815172  456828 client.go:176] duration metric: took 11.35420095s to LocalClient.Create
	I1124 03:37:10.815193  456828 start.go:167] duration metric: took 11.354269562s to libmachine.API.Create "old-k8s-version-098965"
	I1124 03:37:10.815206  456828 start.go:293] postStartSetup for "old-k8s-version-098965" (driver="docker")
	I1124 03:37:10.815216  456828 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:37:10.815268  456828 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:37:10.815313  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:10.835952  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:10.940940  456828 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:37:10.944392  456828 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:37:10.944420  456828 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:37:10.944432  456828 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/addons for local assets ...
	I1124 03:37:10.944531  456828 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/files for local assets ...
	I1124 03:37:10.944626  456828 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem -> 2570692.pem in /etc/ssl/certs
	I1124 03:37:10.944739  456828 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:37:10.953432  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:37:10.971988  456828 start.go:296] duration metric: took 156.749021ms for postStartSetup
	I1124 03:37:10.972390  456828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098965
	I1124 03:37:10.990273  456828 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/config.json ...
	I1124 03:37:10.990577  456828 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:37:10.990655  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:11.011483  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:11.114126  456828 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:37:11.120113  456828 start.go:128] duration metric: took 11.665472167s to createHost
	I1124 03:37:11.120148  456828 start.go:83] releasing machines lock for "old-k8s-version-098965", held for 11.665617423s
	I1124 03:37:11.120263  456828 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-098965
	I1124 03:37:11.139454  456828 ssh_runner.go:195] Run: cat /version.json
	I1124 03:37:11.139475  456828 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:37:11.139509  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:11.139546  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:11.159657  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:11.178004  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:11.355366  456828 ssh_runner.go:195] Run: systemctl --version
	I1124 03:37:11.362316  456828 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:37:11.366848  456828 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:37:11.366938  456828 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:37:11.395828  456828 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 03:37:11.395905  456828 start.go:496] detecting cgroup driver to use...
	I1124 03:37:11.395958  456828 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 03:37:11.396051  456828 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:37:11.412427  456828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:37:11.425664  456828 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:37:11.425739  456828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:37:11.443137  456828 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:37:11.466719  456828 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:37:11.592922  456828 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:37:11.733496  456828 docker.go:234] disabling docker service ...
	I1124 03:37:11.733625  456828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:37:11.756653  456828 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:37:11.773475  456828 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:37:11.921229  456828 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:37:12.062701  456828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:37:12.076946  456828 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:37:12.092118  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1124 03:37:12.101736  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:37:12.111290  456828 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 03:37:12.111365  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 03:37:12.120980  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:37:12.130335  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:37:12.139831  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:37:12.149028  456828 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:37:12.158976  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:37:12.168289  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:37:12.179157  456828 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:37:12.189127  456828 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:37:12.196909  456828 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:37:12.204634  456828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:37:12.341578  456828 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:37:12.476923  456828 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:37:12.477008  456828 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:37:12.481370  456828 start.go:564] Will wait 60s for crictl version
	I1124 03:37:12.481445  456828 ssh_runner.go:195] Run: which crictl
	I1124 03:37:12.485391  456828 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:37:12.516194  456828 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:37:12.516279  456828 ssh_runner.go:195] Run: containerd --version
	I1124 03:37:12.539070  456828 ssh_runner.go:195] Run: containerd --version
	I1124 03:37:12.568276  456828 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1124 03:37:12.571173  456828 cli_runner.go:164] Run: docker network inspect old-k8s-version-098965 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:37:12.589022  456828 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 03:37:12.593373  456828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:37:12.603275  456828 kubeadm.go:884] updating cluster {Name:old-k8s-version-098965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-098965 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:37:12.603400  456828 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 03:37:12.603468  456828 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:37:12.632602  456828 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:37:12.632625  456828 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:37:12.632687  456828 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:37:12.658408  456828 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:37:12.658429  456828 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:37:12.658437  456828 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1124 03:37:12.658540  456828 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-098965 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-098965 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:37:12.658617  456828 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:37:12.684758  456828 cni.go:84] Creating CNI manager for ""
	I1124 03:37:12.684785  456828 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:37:12.684799  456828 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:37:12.684823  456828 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-098965 NodeName:old-k8s-version-098965 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:37:12.684947  456828 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-098965"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:37:12.685018  456828 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 03:37:12.693523  456828 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:37:12.693645  456828 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:37:12.702042  456828 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1124 03:37:12.715991  456828 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:37:12.729770  456828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1124 03:37:12.743578  456828 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:37:12.747534  456828 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:37:12.757715  456828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:37:12.884405  456828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:37:12.905132  456828 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965 for IP: 192.168.85.2
	I1124 03:37:12.905158  456828 certs.go:195] generating shared ca certs ...
	I1124 03:37:12.905175  456828 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:12.905388  456828 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:37:12.905463  456828 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:37:12.905478  456828 certs.go:257] generating profile certs ...
	I1124 03:37:12.905558  456828 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.key
	I1124 03:37:12.905577  456828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt with IP's: []
	I1124 03:37:13.092952  456828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt ...
	I1124 03:37:13.092989  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: {Name:mkdd0fa6209ccf6aa2aa41557354bcbc75868f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.093227  456828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.key ...
	I1124 03:37:13.093245  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.key: {Name:mk9ee935a6f1a8dd6673b97d66ec46cca5ad1664 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.093351  456828 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key.56338243
	I1124 03:37:13.093373  456828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt.56338243 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:37:13.449033  456828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt.56338243 ...
	I1124 03:37:13.449067  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt.56338243: {Name:mk65bad85814a0a12971d39286d0e5c451efbbb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.449251  456828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key.56338243 ...
	I1124 03:37:13.449269  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key.56338243: {Name:mk64b57d615101ac92823627ae52dbd8c44bfea8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.449353  456828 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt.56338243 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt
	I1124 03:37:13.449436  456828 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key.56338243 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key
	I1124 03:37:13.449500  456828 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.key
	I1124 03:37:13.449520  456828 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.crt with IP's: []
	I1124 03:37:13.614481  456828 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.crt ...
	I1124 03:37:13.614513  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.crt: {Name:mk7045112c74be0d05a12bbf47e455d86596546e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.614698  456828 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.key ...
	I1124 03:37:13.614719  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.key: {Name:mkbadb2ce2b4c7ecb9f7755942cb7ff8139714e8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:13.614923  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:37:13.614972  456828 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:37:13.614987  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:37:13.615022  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:37:13.615052  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:37:13.615080  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:37:13.615129  456828 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:37:13.615732  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:37:13.637058  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:37:13.656196  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:37:13.675513  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:37:13.694987  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 03:37:13.714921  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:37:13.733764  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:37:13.753359  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:37:13.772856  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:37:13.791271  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:37:13.810674  456828 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:37:13.830260  456828 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:37:13.845031  456828 ssh_runner.go:195] Run: openssl version
	I1124 03:37:13.854200  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:37:13.864090  456828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:37:13.868253  456828 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:37:13.868333  456828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:37:13.910904  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
	I1124 03:37:13.920025  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:37:13.928734  456828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:37:13.932666  456828 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:37:13.932765  456828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:37:13.979663  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:37:13.988918  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:37:13.998028  456828 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:37:14.003766  456828 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:37:14.003942  456828 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:37:14.050814  456828 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:37:14.059590  456828 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:37:14.063378  456828 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:37:14.063446  456828 kubeadm.go:401] StartCluster: {Name:old-k8s-version-098965 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-098965 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:37:14.063519  456828 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:37:14.063584  456828 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:37:14.095036  456828 cri.go:89] found id: ""
	I1124 03:37:14.095112  456828 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:37:14.103077  456828 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:37:14.111415  456828 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:37:14.111511  456828 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:37:14.120533  456828 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:37:14.120557  456828 kubeadm.go:158] found existing configuration files:
	
	I1124 03:37:14.120636  456828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:37:14.129223  456828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:37:14.129299  456828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:37:14.137169  456828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:37:14.145264  456828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:37:14.145326  456828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:37:14.153440  456828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:37:14.161802  456828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:37:14.161868  456828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:37:14.169170  456828 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:37:14.176729  456828 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:37:14.176794  456828 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:37:14.184164  456828 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:37:14.235215  456828 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 03:37:14.235279  456828 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:37:14.275786  456828 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:37:14.275863  456828 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:37:14.275904  456828 kubeadm.go:319] OS: Linux
	I1124 03:37:14.275954  456828 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:37:14.276007  456828 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:37:14.276058  456828 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:37:14.276111  456828 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:37:14.276161  456828 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:37:14.276213  456828 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:37:14.276262  456828 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:37:14.276314  456828 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:37:14.276364  456828 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:37:14.361031  456828 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:37:14.361200  456828 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:37:14.361333  456828 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 03:37:14.534299  456828 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:37:14.537587  456828 out.go:252]   - Generating certificates and keys ...
	I1124 03:37:14.537751  456828 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:37:14.537876  456828 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:37:15.136064  456828 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:37:15.790461  456828 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:37:16.745198  456828 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:37:17.101081  456828 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:37:17.816844  456828 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:37:17.817225  456828 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-098965] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:37:18.708622  456828 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:37:18.708941  456828 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-098965] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:37:19.626997  456828 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:37:20.013744  456828 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:37:21.332223  456828 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:37:21.333010  456828 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:37:21.538924  456828 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:37:21.950934  456828 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:37:23.178695  456828 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:37:23.307692  456828 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:37:23.308662  456828 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:37:23.312055  456828 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:37:23.317674  456828 out.go:252]   - Booting up control plane ...
	I1124 03:37:23.317788  456828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:37:23.317867  456828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:37:23.317934  456828 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:37:23.338190  456828 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:37:23.339603  456828 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:37:23.339662  456828 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:37:23.480314  456828 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 03:37:31.483609  456828 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.003694 seconds
	I1124 03:37:31.483744  456828 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:37:31.502417  456828 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:37:32.033208  456828 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:37:32.033430  456828 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-098965 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:37:32.546541  456828 kubeadm.go:319] [bootstrap-token] Using token: ycw9qc.7i65x4n1zr1z1k2d
	I1124 03:37:32.549515  456828 out.go:252]   - Configuring RBAC rules ...
	I1124 03:37:32.549646  456828 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:37:32.555568  456828 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:37:32.568444  456828 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:37:32.576351  456828 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:37:32.581974  456828 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:37:32.586465  456828 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:37:32.604043  456828 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:37:32.913255  456828 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:37:32.963682  456828 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:37:32.981293  456828 kubeadm.go:319] 
	I1124 03:37:32.981375  456828 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:37:32.981391  456828 kubeadm.go:319] 
	I1124 03:37:32.981470  456828 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:37:32.981490  456828 kubeadm.go:319] 
	I1124 03:37:32.981515  456828 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:37:32.983497  456828 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:37:32.983565  456828 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:37:32.983572  456828 kubeadm.go:319] 
	I1124 03:37:32.983653  456828 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:37:32.983663  456828 kubeadm.go:319] 
	I1124 03:37:32.983715  456828 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:37:32.983723  456828 kubeadm.go:319] 
	I1124 03:37:32.983775  456828 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:37:32.983853  456828 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:37:32.983929  456828 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:37:32.983938  456828 kubeadm.go:319] 
	I1124 03:37:32.984037  456828 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:37:32.984117  456828 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:37:32.984123  456828 kubeadm.go:319] 
	I1124 03:37:32.984216  456828 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ycw9qc.7i65x4n1zr1z1k2d \
	I1124 03:37:32.984336  456828 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 \
	I1124 03:37:32.984390  456828 kubeadm.go:319] 	--control-plane 
	I1124 03:37:32.984397  456828 kubeadm.go:319] 
	I1124 03:37:32.984704  456828 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:37:32.984716  456828 kubeadm.go:319] 
	I1124 03:37:32.984916  456828 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ycw9qc.7i65x4n1zr1z1k2d \
	I1124 03:37:32.985043  456828 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 
	I1124 03:37:32.991944  456828 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:37:32.992068  456828 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:37:32.992094  456828 cni.go:84] Creating CNI manager for ""
	I1124 03:37:32.992102  456828 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:37:32.995225  456828 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:37:32.998096  456828 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:37:33.004093  456828 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 03:37:33.004119  456828 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:37:33.036441  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:37:34.155879  456828 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.119390679s)
	I1124 03:37:34.155921  456828 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:37:34.156043  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-098965 minikube.k8s.io/updated_at=2025_11_24T03_37_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=old-k8s-version-098965 minikube.k8s.io/primary=true
	I1124 03:37:34.156059  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:34.370014  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:34.370081  456828 ops.go:34] apiserver oom_adj: -16
	I1124 03:37:34.870621  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:35.370425  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:35.870744  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:36.370591  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:36.870102  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:37.370755  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:37.870481  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:38.370729  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:38.870716  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:39.370861  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:39.870112  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:40.370985  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:40.870131  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:41.370224  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:41.870910  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:42.370129  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:42.870708  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:43.370299  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:43.870132  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:44.370373  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:44.870148  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:45.370873  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:45.870208  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:46.370930  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:46.870103  456828 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:37:46.965988  456828 kubeadm.go:1114] duration metric: took 12.810009639s to wait for elevateKubeSystemPrivileges
	I1124 03:37:46.966014  456828 kubeadm.go:403] duration metric: took 32.902577839s to StartCluster
	I1124 03:37:46.966033  456828 settings.go:142] acquiring lock: {Name:mk06b563e5bc383cd64ed92ea3d8ac6aac195923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:46.966096  456828 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:37:46.967091  456828 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/kubeconfig: {Name:mk59b88a9b5c6c93f7412b3f64976d4efe64bdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:37:46.967316  456828 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:37:46.967431  456828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:37:46.967681  456828 config.go:182] Loaded profile config "old-k8s-version-098965": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1124 03:37:46.967714  456828 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:37:46.967774  456828 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-098965"
	I1124 03:37:46.967788  456828 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-098965"
	I1124 03:37:46.967809  456828 host.go:66] Checking if "old-k8s-version-098965" exists ...
	I1124 03:37:46.968568  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:46.969265  456828 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-098965"
	I1124 03:37:46.969293  456828 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-098965"
	I1124 03:37:46.969620  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:46.976026  456828 out.go:179] * Verifying Kubernetes components...
	I1124 03:37:46.980332  456828 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:37:47.005475  456828 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-098965"
	I1124 03:37:47.005521  456828 host.go:66] Checking if "old-k8s-version-098965" exists ...
	I1124 03:37:47.006016  456828 cli_runner.go:164] Run: docker container inspect old-k8s-version-098965 --format={{.State.Status}}
	I1124 03:37:47.021223  456828 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:37:47.025797  456828 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:37:47.025822  456828 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:37:47.025899  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:47.043575  456828 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:37:47.043596  456828 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:37:47.043662  456828 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-098965
	I1124 03:37:47.067444  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:47.085937  456828 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33418 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/old-k8s-version-098965/id_rsa Username:docker}
	I1124 03:37:47.279804  456828 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:37:47.286358  456828 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:37:47.448103  456828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:37:47.467412  456828 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:37:48.289350  456828 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.002895345s)
	I1124 03:37:48.290385  456828 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-098965" to be "Ready" ...
	I1124 03:37:48.311715  456828 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.031826007s)
	I1124 03:37:48.311750  456828 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:37:48.801783  456828 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.334276635s)
	I1124 03:37:48.805329  456828 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 03:37:48.808579  456828 addons.go:530] duration metric: took 1.840852722s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1124 03:37:48.816214  456828 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-098965" context rescaled to 1 replicas
	W1124 03:37:50.294251  456828 node_ready.go:57] node "old-k8s-version-098965" has "Ready":"False" status (will retry)
	W1124 03:37:52.793560  456828 node_ready.go:57] node "old-k8s-version-098965" has "Ready":"False" status (will retry)
	W1124 03:37:54.793903  456828 node_ready.go:57] node "old-k8s-version-098965" has "Ready":"False" status (will retry)
	W1124 03:37:56.794298  456828 node_ready.go:57] node "old-k8s-version-098965" has "Ready":"False" status (will retry)
	W1124 03:37:59.293536  456828 node_ready.go:57] node "old-k8s-version-098965" has "Ready":"False" status (will retry)
	I1124 03:38:00.344278  456828 node_ready.go:49] node "old-k8s-version-098965" is "Ready"
	I1124 03:38:00.344383  456828 node_ready.go:38] duration metric: took 12.053923317s for node "old-k8s-version-098965" to be "Ready" ...
	I1124 03:38:00.344417  456828 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:38:00.344536  456828 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:38:00.402952  456828 api_server.go:72] duration metric: took 13.435606359s to wait for apiserver process to appear ...
	I1124 03:38:00.402979  456828 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:38:00.403000  456828 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:38:00.414315  456828 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:38:00.416457  456828 api_server.go:141] control plane version: v1.28.0
	I1124 03:38:00.416601  456828 api_server.go:131] duration metric: took 13.613451ms to wait for apiserver health ...
	I1124 03:38:00.416631  456828 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:38:00.425631  456828 system_pods.go:59] 8 kube-system pods found
	I1124 03:38:00.425724  456828 system_pods.go:61] "coredns-5dd5756b68-2kmf2" [9c6642fb-17b7-4199-b927-eb63b9a58260] Pending
	I1124 03:38:00.425749  456828 system_pods.go:61] "etcd-old-k8s-version-098965" [994c486f-9839-4407-bc6d-d7c52c9dcfe7] Running
	I1124 03:38:00.425783  456828 system_pods.go:61] "kindnet-mctv9" [0f0d91cd-7d64-482e-b33c-383b20f5bd79] Running
	I1124 03:38:00.425810  456828 system_pods.go:61] "kube-apiserver-old-k8s-version-098965" [777b36fe-0c46-4427-90b9-ef48ae1cc287] Running
	I1124 03:38:00.425830  456828 system_pods.go:61] "kube-controller-manager-old-k8s-version-098965" [3be22a1a-db9f-446f-9b0a-e61ce5482e12] Running
	I1124 03:38:00.425851  456828 system_pods.go:61] "kube-proxy-5t7nq" [6050bdb0-6390-48c7-863f-520ef6277ad8] Running
	I1124 03:38:00.425879  456828 system_pods.go:61] "kube-scheduler-old-k8s-version-098965" [ff509e4b-4fde-4ea0-8261-5f4463c5be01] Running
	I1124 03:38:00.425909  456828 system_pods.go:61] "storage-provisioner" [9ede1da5-704c-4aab-93e0-77ce93158129] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:38:00.425943  456828 system_pods.go:74] duration metric: took 9.290401ms to wait for pod list to return data ...
	I1124 03:38:00.425969  456828 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:38:00.429247  456828 default_sa.go:45] found service account: "default"
	I1124 03:38:00.429360  456828 default_sa.go:55] duration metric: took 3.356866ms for default service account to be created ...
	I1124 03:38:00.429393  456828 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:38:00.435199  456828 system_pods.go:86] 8 kube-system pods found
	I1124 03:38:00.435313  456828 system_pods.go:89] "coredns-5dd5756b68-2kmf2" [9c6642fb-17b7-4199-b927-eb63b9a58260] Pending
	I1124 03:38:00.435337  456828 system_pods.go:89] "etcd-old-k8s-version-098965" [994c486f-9839-4407-bc6d-d7c52c9dcfe7] Running
	I1124 03:38:00.435376  456828 system_pods.go:89] "kindnet-mctv9" [0f0d91cd-7d64-482e-b33c-383b20f5bd79] Running
	I1124 03:38:00.435403  456828 system_pods.go:89] "kube-apiserver-old-k8s-version-098965" [777b36fe-0c46-4427-90b9-ef48ae1cc287] Running
	I1124 03:38:00.435426  456828 system_pods.go:89] "kube-controller-manager-old-k8s-version-098965" [3be22a1a-db9f-446f-9b0a-e61ce5482e12] Running
	I1124 03:38:00.435460  456828 system_pods.go:89] "kube-proxy-5t7nq" [6050bdb0-6390-48c7-863f-520ef6277ad8] Running
	I1124 03:38:00.435482  456828 system_pods.go:89] "kube-scheduler-old-k8s-version-098965" [ff509e4b-4fde-4ea0-8261-5f4463c5be01] Running
	I1124 03:38:00.435514  456828 system_pods.go:89] "storage-provisioner" [9ede1da5-704c-4aab-93e0-77ce93158129] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:38:00.435576  456828 retry.go:31] will retry after 251.537949ms: missing components: kube-dns
	I1124 03:38:00.691897  456828 system_pods.go:86] 8 kube-system pods found
	I1124 03:38:00.691936  456828 system_pods.go:89] "coredns-5dd5756b68-2kmf2" [9c6642fb-17b7-4199-b927-eb63b9a58260] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:38:00.691943  456828 system_pods.go:89] "etcd-old-k8s-version-098965" [994c486f-9839-4407-bc6d-d7c52c9dcfe7] Running
	I1124 03:38:00.691949  456828 system_pods.go:89] "kindnet-mctv9" [0f0d91cd-7d64-482e-b33c-383b20f5bd79] Running
	I1124 03:38:00.691954  456828 system_pods.go:89] "kube-apiserver-old-k8s-version-098965" [777b36fe-0c46-4427-90b9-ef48ae1cc287] Running
	I1124 03:38:00.691959  456828 system_pods.go:89] "kube-controller-manager-old-k8s-version-098965" [3be22a1a-db9f-446f-9b0a-e61ce5482e12] Running
	I1124 03:38:00.691968  456828 system_pods.go:89] "kube-proxy-5t7nq" [6050bdb0-6390-48c7-863f-520ef6277ad8] Running
	I1124 03:38:00.691976  456828 system_pods.go:89] "kube-scheduler-old-k8s-version-098965" [ff509e4b-4fde-4ea0-8261-5f4463c5be01] Running
	I1124 03:38:00.691981  456828 system_pods.go:89] "storage-provisioner" [9ede1da5-704c-4aab-93e0-77ce93158129] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:38:00.691999  456828 retry.go:31] will retry after 269.359214ms: missing components: kube-dns
	I1124 03:38:00.970909  456828 system_pods.go:86] 8 kube-system pods found
	I1124 03:38:00.970944  456828 system_pods.go:89] "coredns-5dd5756b68-2kmf2" [9c6642fb-17b7-4199-b927-eb63b9a58260] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:38:00.970951  456828 system_pods.go:89] "etcd-old-k8s-version-098965" [994c486f-9839-4407-bc6d-d7c52c9dcfe7] Running
	I1124 03:38:00.970957  456828 system_pods.go:89] "kindnet-mctv9" [0f0d91cd-7d64-482e-b33c-383b20f5bd79] Running
	I1124 03:38:00.970961  456828 system_pods.go:89] "kube-apiserver-old-k8s-version-098965" [777b36fe-0c46-4427-90b9-ef48ae1cc287] Running
	I1124 03:38:00.970966  456828 system_pods.go:89] "kube-controller-manager-old-k8s-version-098965" [3be22a1a-db9f-446f-9b0a-e61ce5482e12] Running
	I1124 03:38:00.970969  456828 system_pods.go:89] "kube-proxy-5t7nq" [6050bdb0-6390-48c7-863f-520ef6277ad8] Running
	I1124 03:38:00.970973  456828 system_pods.go:89] "kube-scheduler-old-k8s-version-098965" [ff509e4b-4fde-4ea0-8261-5f4463c5be01] Running
	I1124 03:38:00.970978  456828 system_pods.go:89] "storage-provisioner" [9ede1da5-704c-4aab-93e0-77ce93158129] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:38:00.970996  456828 retry.go:31] will retry after 426.462867ms: missing components: kube-dns
	I1124 03:38:01.403286  456828 system_pods.go:86] 8 kube-system pods found
	I1124 03:38:01.403315  456828 system_pods.go:89] "coredns-5dd5756b68-2kmf2" [9c6642fb-17b7-4199-b927-eb63b9a58260] Running
	I1124 03:38:01.403322  456828 system_pods.go:89] "etcd-old-k8s-version-098965" [994c486f-9839-4407-bc6d-d7c52c9dcfe7] Running
	I1124 03:38:01.403330  456828 system_pods.go:89] "kindnet-mctv9" [0f0d91cd-7d64-482e-b33c-383b20f5bd79] Running
	I1124 03:38:01.403335  456828 system_pods.go:89] "kube-apiserver-old-k8s-version-098965" [777b36fe-0c46-4427-90b9-ef48ae1cc287] Running
	I1124 03:38:01.403341  456828 system_pods.go:89] "kube-controller-manager-old-k8s-version-098965" [3be22a1a-db9f-446f-9b0a-e61ce5482e12] Running
	I1124 03:38:01.403345  456828 system_pods.go:89] "kube-proxy-5t7nq" [6050bdb0-6390-48c7-863f-520ef6277ad8] Running
	I1124 03:38:01.403349  456828 system_pods.go:89] "kube-scheduler-old-k8s-version-098965" [ff509e4b-4fde-4ea0-8261-5f4463c5be01] Running
	I1124 03:38:01.403353  456828 system_pods.go:89] "storage-provisioner" [9ede1da5-704c-4aab-93e0-77ce93158129] Running
	I1124 03:38:01.403362  456828 system_pods.go:126] duration metric: took 973.897592ms to wait for k8s-apps to be running ...
	I1124 03:38:01.403373  456828 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:38:01.403427  456828 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:38:01.421738  456828 system_svc.go:56] duration metric: took 18.35448ms WaitForService to wait for kubelet
	I1124 03:38:01.421765  456828 kubeadm.go:587] duration metric: took 14.454425317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:38:01.421786  456828 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:38:01.425010  456828 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:38:01.425044  456828 node_conditions.go:123] node cpu capacity is 2
	I1124 03:38:01.425059  456828 node_conditions.go:105] duration metric: took 3.267233ms to run NodePressure ...
	I1124 03:38:01.425099  456828 start.go:242] waiting for startup goroutines ...
	I1124 03:38:01.425108  456828 start.go:247] waiting for cluster config update ...
	I1124 03:38:01.425124  456828 start.go:256] writing updated cluster config ...
	I1124 03:38:01.425448  456828 ssh_runner.go:195] Run: rm -f paused
	I1124 03:38:01.429212  456828 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:38:01.435249  456828 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-2kmf2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.447218  456828 pod_ready.go:94] pod "coredns-5dd5756b68-2kmf2" is "Ready"
	I1124 03:38:01.447254  456828 pod_ready.go:86] duration metric: took 11.97007ms for pod "coredns-5dd5756b68-2kmf2" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.452465  456828 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.459099  456828 pod_ready.go:94] pod "etcd-old-k8s-version-098965" is "Ready"
	I1124 03:38:01.459128  456828 pod_ready.go:86] duration metric: took 6.576599ms for pod "etcd-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.471032  456828 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.476662  456828 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-098965" is "Ready"
	I1124 03:38:01.476688  456828 pod_ready.go:86] duration metric: took 5.56861ms for pod "kube-apiserver-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.480096  456828 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:01.833649  456828 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-098965" is "Ready"
	I1124 03:38:01.833715  456828 pod_ready.go:86] duration metric: took 353.588012ms for pod "kube-controller-manager-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:02.035335  456828 pod_ready.go:83] waiting for pod "kube-proxy-5t7nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:02.433941  456828 pod_ready.go:94] pod "kube-proxy-5t7nq" is "Ready"
	I1124 03:38:02.433973  456828 pod_ready.go:86] duration metric: took 398.560828ms for pod "kube-proxy-5t7nq" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:02.633735  456828 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:03.033530  456828 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-098965" is "Ready"
	I1124 03:38:03.033561  456828 pod_ready.go:86] duration metric: took 399.801466ms for pod "kube-scheduler-old-k8s-version-098965" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:38:03.033575  456828 pod_ready.go:40] duration metric: took 1.604321281s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:38:03.103182  456828 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1124 03:38:03.106579  456828 out.go:203] 
	W1124 03:38:03.109581  456828 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 03:38:03.112629  456828 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 03:38:03.116685  456828 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-098965" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	2473c87591ead       1611cd07b61d5       8 seconds ago       Running             busybox                   0                   0780548608168       busybox                                          default
	32ab776c7affb       ba04bb24b9575       13 seconds ago      Running             storage-provisioner       0                   ddcd48a171630       storage-provisioner                              kube-system
	28a52e8d1e9e4       97e04611ad434       13 seconds ago      Running             coredns                   0                   aa01d3a3f7cba       coredns-5dd5756b68-2kmf2                         kube-system
	37f20e76ffbc2       b1a8c6f707935       25 seconds ago      Running             kindnet-cni               0                   2a6bd814ac01e       kindnet-mctv9                                    kube-system
	4baa8c107b38c       940f54a5bcae9       27 seconds ago      Running             kube-proxy                0                   b85e6b6d514cc       kube-proxy-5t7nq                                 kube-system
	8fb25b361e023       9cdd6470f48c8       49 seconds ago      Running             etcd                      0                   b669262c23763       etcd-old-k8s-version-098965                      kube-system
	666ad3b5bbcc5       00543d2fe5d71       49 seconds ago      Running             kube-apiserver            0                   9edcf3c3e4d9e       kube-apiserver-old-k8s-version-098965            kube-system
	95905c97af2e4       762dce4090c5f       49 seconds ago      Running             kube-scheduler            0                   d6f0d280dee01       kube-scheduler-old-k8s-version-098965            kube-system
	94d7bde87dab5       46cc66ccc7c19       49 seconds ago      Running             kube-controller-manager   0                   8eb2c9f965876       kube-controller-manager-old-k8s-version-098965   kube-system
	
	
	==> containerd <==
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.893826021Z" level=info msg="Container 32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.901529017Z" level=info msg="StartContainer for \"28a52e8d1e9e4c99322bf7f4a542d09e22eed502ede9105bfd3867fff8b743ae\""
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.902763625Z" level=info msg="connecting to shim 28a52e8d1e9e4c99322bf7f4a542d09e22eed502ede9105bfd3867fff8b743ae" address="unix:///run/containerd/s/70d70892534976c42f017b6a57c07c5f882e60cfc509cf351b04e5c63883f9c6" protocol=ttrpc version=3
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.923930845Z" level=info msg="CreateContainer within sandbox \"ddcd48a171630d558701e23e8b84d43ca3b433b204586da5fd73071e2c73cf02\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388\""
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.927472048Z" level=info msg="StartContainer for \"32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388\""
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.929854567Z" level=info msg="connecting to shim 32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388" address="unix:///run/containerd/s/5ddd01c5f051ac256aede9694ac052a9c600e13f3e3f44d833556ac361f844c9" protocol=ttrpc version=3
	Nov 24 03:38:00 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:00.986416771Z" level=info msg="StartContainer for \"28a52e8d1e9e4c99322bf7f4a542d09e22eed502ede9105bfd3867fff8b743ae\" returns successfully"
	Nov 24 03:38:01 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:01.032852607Z" level=info msg="StartContainer for \"32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388\" returns successfully"
	Nov 24 03:38:03 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:03.633430300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b377806c-ae20-44d2-9d0f-07b097026328,Namespace:default,Attempt:0,}"
	Nov 24 03:38:03 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:03.690035484Z" level=info msg="connecting to shim 07805486081686e75b51f404a8d192120c8e44f1df35435e82a18cd840b250a6" address="unix:///run/containerd/s/62c9570c9e36a3dfb4b0454e8ff44f8873d73aec0247dc7c06a4c63bdd606e84" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:38:03 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:03.757467931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b377806c-ae20-44d2-9d0f-07b097026328,Namespace:default,Attempt:0,} returns sandbox id \"07805486081686e75b51f404a8d192120c8e44f1df35435e82a18cd840b250a6\""
	Nov 24 03:38:03 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:03.759222049Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.797829117Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.799579082Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937183"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.802353497Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.809134257Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.810394309Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.051134434s"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.810432586Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.813949961Z" level=info msg="CreateContainer within sandbox \"07805486081686e75b51f404a8d192120c8e44f1df35435e82a18cd840b250a6\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.827500676Z" level=info msg="Container 2473c87591ead98e23e27a6582c8fc6bfb2afc235a7786ab166b053a67742968: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.839960013Z" level=info msg="CreateContainer within sandbox \"07805486081686e75b51f404a8d192120c8e44f1df35435e82a18cd840b250a6\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"2473c87591ead98e23e27a6582c8fc6bfb2afc235a7786ab166b053a67742968\""
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.841216955Z" level=info msg="StartContainer for \"2473c87591ead98e23e27a6582c8fc6bfb2afc235a7786ab166b053a67742968\""
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.842449240Z" level=info msg="connecting to shim 2473c87591ead98e23e27a6582c8fc6bfb2afc235a7786ab166b053a67742968" address="unix:///run/containerd/s/62c9570c9e36a3dfb4b0454e8ff44f8873d73aec0247dc7c06a4c63bdd606e84" protocol=ttrpc version=3
	Nov 24 03:38:05 old-k8s-version-098965 containerd[757]: time="2025-11-24T03:38:05.911309396Z" level=info msg="StartContainer for \"2473c87591ead98e23e27a6582c8fc6bfb2afc235a7786ab166b053a67742968\" returns successfully"
	Nov 24 03:38:11 old-k8s-version-098965 containerd[757]: E1124 03:38:11.465754     757 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [28a52e8d1e9e4c99322bf7f4a542d09e22eed502ede9105bfd3867fff8b743ae] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36019 - 46965 "HINFO IN 101273306430571101.3418018538030985896. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.022963225s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-098965
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-098965
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=old-k8s-version-098965
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_37_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:37:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-098965
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:38:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:38:03 +0000   Mon, 24 Nov 2025 03:37:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:38:03 +0000   Mon, 24 Nov 2025 03:37:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:38:03 +0000   Mon, 24 Nov 2025 03:37:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:38:03 +0000   Mon, 24 Nov 2025 03:38:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-098965
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                016e6bb7-0740-4efc-ad46-1814703763df
	  Boot ID:                    63a8a852-1462-44b1-9d6f-f77d26e8568f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-2kmf2                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-098965                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-mctv9                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-098965             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-098965    200m (10%)    0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kube-proxy-5t7nq                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-098965             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-098965 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-098965 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x7 over 50s)  kubelet          Node old-k8s-version-098965 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  50s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-098965 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-098965 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-098965 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-098965 event: Registered Node old-k8s-version-098965 in Controller
	  Normal  NodeReady                14s                kubelet          Node old-k8s-version-098965 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:27] overlayfs: idmapped layers are currently not supported
	[Nov24 02:28] overlayfs: idmapped layers are currently not supported
	[Nov24 02:30] overlayfs: idmapped layers are currently not supported
	[  +9.824160] overlayfs: idmapped layers are currently not supported
	[Nov24 02:31] overlayfs: idmapped layers are currently not supported
	[Nov24 02:32] overlayfs: idmapped layers are currently not supported
	[ +27.981383] overlayfs: idmapped layers are currently not supported
	[Nov24 02:33] overlayfs: idmapped layers are currently not supported
	[Nov24 02:34] overlayfs: idmapped layers are currently not supported
	[Nov24 02:35] overlayfs: idmapped layers are currently not supported
	[Nov24 02:36] overlayfs: idmapped layers are currently not supported
	[Nov24 02:37] overlayfs: idmapped layers are currently not supported
	[Nov24 02:38] overlayfs: idmapped layers are currently not supported
	[Nov24 02:39] overlayfs: idmapped layers are currently not supported
	[ +24.837346] overlayfs: idmapped layers are currently not supported
	[Nov24 02:40] overlayfs: idmapped layers are currently not supported
	[ +40.823948] overlayfs: idmapped layers are currently not supported
	[  +1.705989] overlayfs: idmapped layers are currently not supported
	[Nov24 02:42] overlayfs: idmapped layers are currently not supported
	[ +21.661904] overlayfs: idmapped layers are currently not supported
	[Nov24 02:44] overlayfs: idmapped layers are currently not supported
	[  +1.074777] overlayfs: idmapped layers are currently not supported
	[Nov24 02:46] overlayfs: idmapped layers are currently not supported
	[ +19.120392] overlayfs: idmapped layers are currently not supported
	[Nov24 02:48] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [8fb25b361e0239913db0778bdfb64d93fee6d1a16be3fd7f4f316e46a892bbde] <==
	{"level":"info","ts":"2025-11-24T03:37:25.43693Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-24T03:37:25.437104Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-24T03:37:25.441421Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T03:37:25.44164Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T03:37:25.441821Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-24T03:37:25.445077Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T03:37:25.445165Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T03:37:25.495636Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-24T03:37:25.495889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-24T03:37:25.496002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-24T03:37:25.496143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-24T03:37:25.496408Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T03:37:25.497335Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-24T03:37:25.497479Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-24T03:37:25.498979Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-098965 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T03:37:25.499256Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T03:37:25.50078Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-24T03:37:25.500939Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T03:37:25.501272Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:37:25.503825Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T03:37:25.502905Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T03:37:25.504027Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T03:37:25.506444Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:37:25.506667Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T03:37:25.506735Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 03:38:15 up  2:20,  0 user,  load average: 2.55, 3.20, 2.77
	Linux old-k8s-version-098965 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [37f20e76ffbc24c2b929d70181ec4667f979dd10e9528ae0a376dca755a608bd] <==
	I1124 03:37:49.827895       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:37:49.828142       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:37:49.828290       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:37:49.828302       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:37:49.828312       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:37:50Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:37:50.033133       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:37:50.033241       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:37:50.033288       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:37:50.034681       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:37:50.324571       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:37:50.324658       1 metrics.go:72] Registering metrics
	I1124 03:37:50.324749       1 controller.go:711] "Syncing nftables rules"
	I1124 03:38:00.040225       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:38:00.040277       1 main.go:301] handling current node
	I1124 03:38:10.032819       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:38:10.033055       1 main.go:301] handling current node
	
	
	==> kube-apiserver [666ad3b5bbcc57cef3344095ab7c6a95424fcdae77e237214b172a62b87abb2e] <==
	I1124 03:37:29.817316       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 03:37:29.821018       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1124 03:37:29.821061       1 aggregator.go:166] initial CRD sync complete...
	I1124 03:37:29.821069       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 03:37:29.821233       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 03:37:29.821307       1 cache.go:39] Caches are synced for autoregister controller
	I1124 03:37:29.822681       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 03:37:29.854244       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:37:29.879343       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 03:37:29.891373       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1124 03:37:30.501665       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:37:30.515842       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:37:30.515870       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:37:31.168083       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:37:31.220692       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:37:31.327576       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:37:31.335539       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:37:31.336837       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 03:37:31.342035       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:37:31.795879       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 03:37:32.895003       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 03:37:32.910864       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:37:32.928122       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 03:37:45.687691       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 03:37:46.683285       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [94d7bde87dab52f8ec3b1763043f2afa14f31bf91ba4ddd110aa3c091eb1f236] <==
	I1124 03:37:45.830060       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I1124 03:37:45.830629       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="old-k8s-version-098965"
	I1124 03:37:45.832023       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1124 03:37:45.830221       1 event.go:307] "Event occurred" object="old-k8s-version-098965" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-098965 event: Registered Node old-k8s-version-098965 in Controller"
	I1124 03:37:46.234151       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:37:46.277193       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 03:37:46.277382       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 03:37:46.504268       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xqjm9"
	I1124 03:37:46.531473       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-2kmf2"
	I1124 03:37:46.546744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="854.357749ms"
	I1124 03:37:46.566310       1 event.go:307] "Event occurred" object="kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToCreateEndpoint" message="Failed to create endpoint for service kube-system/kube-dns: endpoints \"kube-dns\" already exists"
	I1124 03:37:46.584884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="38.082715ms"
	I1124 03:37:46.585113       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="149.391µs"
	I1124 03:37:46.696833       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-5t7nq"
	I1124 03:37:46.703751       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-mctv9"
	I1124 03:37:48.352432       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 03:37:48.387265       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-xqjm9"
	I1124 03:37:48.403262       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.894657ms"
	I1124 03:37:48.414134       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.824275ms"
	I1124 03:37:48.414238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.528µs"
	I1124 03:38:00.391250       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="131.267µs"
	I1124 03:38:00.449093       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.589µs"
	I1124 03:38:00.836415       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1124 03:38:01.292788       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="13.437073ms"
	I1124 03:38:01.294027       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.155904ms"
	
	
	==> kube-proxy [4baa8c107b38cc2761e31cd050e33ec89802d4aa44bd4f1d1d031950a9d835ec] <==
	I1124 03:37:47.752353       1 server_others.go:69] "Using iptables proxy"
	I1124 03:37:47.775066       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1124 03:37:47.844709       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:37:47.849188       1 server_others.go:152] "Using iptables Proxier"
	I1124 03:37:47.849234       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 03:37:47.849286       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 03:37:47.849319       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 03:37:47.849526       1 server.go:846] "Version info" version="v1.28.0"
	I1124 03:37:47.849543       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:37:47.851283       1 config.go:188] "Starting service config controller"
	I1124 03:37:47.851308       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 03:37:47.851328       1 config.go:97] "Starting endpoint slice config controller"
	I1124 03:37:47.851333       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 03:37:47.851909       1 config.go:315] "Starting node config controller"
	I1124 03:37:47.851919       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 03:37:47.952223       1 shared_informer.go:318] Caches are synced for node config
	I1124 03:37:47.952255       1 shared_informer.go:318] Caches are synced for service config
	I1124 03:37:47.952281       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [95905c97af2e4e393feeaef2edf3e1c7c5fc6dcb11cccf3554a17255c56bd15d] <==
	W1124 03:37:29.836095       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 03:37:29.836123       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 03:37:30.637685       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1124 03:37:30.637945       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1124 03:37:30.642715       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 03:37:30.642753       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1124 03:37:30.708776       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1124 03:37:30.708817       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1124 03:37:30.711532       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 03:37:30.711569       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 03:37:30.717417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1124 03:37:30.717460       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1124 03:37:30.738383       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1124 03:37:30.738423       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1124 03:37:30.770745       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 03:37:30.770991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 03:37:30.836552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 03:37:30.836594       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 03:37:30.842629       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1124 03:37:30.842859       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1124 03:37:30.843777       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 03:37:30.843981       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 03:37:30.921894       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1124 03:37:30.922102       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1124 03:37:33.702680       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 03:37:45 old-k8s-version-098965 kubelet[1540]: I1124 03:37:45.681266    1540 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.713561    1540 topology_manager.go:215] "Topology Admit Handler" podUID="6050bdb0-6390-48c7-863f-520ef6277ad8" podNamespace="kube-system" podName="kube-proxy-5t7nq"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.716233    1540 topology_manager.go:215] "Topology Admit Handler" podUID="0f0d91cd-7d64-482e-b33c-383b20f5bd79" podNamespace="kube-system" podName="kindnet-mctv9"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.767542    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0f0d91cd-7d64-482e-b33c-383b20f5bd79-cni-cfg\") pod \"kindnet-mctv9\" (UID: \"0f0d91cd-7d64-482e-b33c-383b20f5bd79\") " pod="kube-system/kindnet-mctv9"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.767756    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f0d91cd-7d64-482e-b33c-383b20f5bd79-xtables-lock\") pod \"kindnet-mctv9\" (UID: \"0f0d91cd-7d64-482e-b33c-383b20f5bd79\") " pod="kube-system/kindnet-mctv9"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.767863    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgdbr\" (UniqueName: \"kubernetes.io/projected/0f0d91cd-7d64-482e-b33c-383b20f5bd79-kube-api-access-tgdbr\") pod \"kindnet-mctv9\" (UID: \"0f0d91cd-7d64-482e-b33c-383b20f5bd79\") " pod="kube-system/kindnet-mctv9"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.767964    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6050bdb0-6390-48c7-863f-520ef6277ad8-xtables-lock\") pod \"kube-proxy-5t7nq\" (UID: \"6050bdb0-6390-48c7-863f-520ef6277ad8\") " pod="kube-system/kube-proxy-5t7nq"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.768057    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6050bdb0-6390-48c7-863f-520ef6277ad8-lib-modules\") pod \"kube-proxy-5t7nq\" (UID: \"6050bdb0-6390-48c7-863f-520ef6277ad8\") " pod="kube-system/kube-proxy-5t7nq"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.768153    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnmtw\" (UniqueName: \"kubernetes.io/projected/6050bdb0-6390-48c7-863f-520ef6277ad8-kube-api-access-dnmtw\") pod \"kube-proxy-5t7nq\" (UID: \"6050bdb0-6390-48c7-863f-520ef6277ad8\") " pod="kube-system/kube-proxy-5t7nq"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.768259    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6050bdb0-6390-48c7-863f-520ef6277ad8-kube-proxy\") pod \"kube-proxy-5t7nq\" (UID: \"6050bdb0-6390-48c7-863f-520ef6277ad8\") " pod="kube-system/kube-proxy-5t7nq"
	Nov 24 03:37:46 old-k8s-version-098965 kubelet[1540]: I1124 03:37:46.768359    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f0d91cd-7d64-482e-b33c-383b20f5bd79-lib-modules\") pod \"kindnet-mctv9\" (UID: \"0f0d91cd-7d64-482e-b33c-383b20f5bd79\") " pod="kube-system/kindnet-mctv9"
	Nov 24 03:37:50 old-k8s-version-098965 kubelet[1540]: I1124 03:37:50.218063    1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5t7nq" podStartSLOduration=4.2180200469999996 podCreationTimestamp="2025-11-24 03:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:37:48.208401476 +0000 UTC m=+15.367432854" watchObservedRunningTime="2025-11-24 03:37:50.218020047 +0000 UTC m=+17.377051399"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.190884    1540 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.379008    1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-mctv9" podStartSLOduration=12.155727494 podCreationTimestamp="2025-11-24 03:37:46 +0000 UTC" firstStartedPulling="2025-11-24 03:37:47.330956178 +0000 UTC m=+14.489987539" lastFinishedPulling="2025-11-24 03:37:49.554182857 +0000 UTC m=+16.713214218" observedRunningTime="2025-11-24 03:37:50.219024146 +0000 UTC m=+17.378055507" watchObservedRunningTime="2025-11-24 03:38:00.378954173 +0000 UTC m=+27.537985543"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.379210    1540 topology_manager.go:215] "Topology Admit Handler" podUID="9ede1da5-704c-4aab-93e0-77ce93158129" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.394275    1540 topology_manager.go:215] "Topology Admit Handler" podUID="9c6642fb-17b7-4199-b927-eb63b9a58260" podNamespace="kube-system" podName="coredns-5dd5756b68-2kmf2"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.504386    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgg48\" (UniqueName: \"kubernetes.io/projected/9c6642fb-17b7-4199-b927-eb63b9a58260-kube-api-access-fgg48\") pod \"coredns-5dd5756b68-2kmf2\" (UID: \"9c6642fb-17b7-4199-b927-eb63b9a58260\") " pod="kube-system/coredns-5dd5756b68-2kmf2"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.504451    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c6642fb-17b7-4199-b927-eb63b9a58260-config-volume\") pod \"coredns-5dd5756b68-2kmf2\" (UID: \"9c6642fb-17b7-4199-b927-eb63b9a58260\") " pod="kube-system/coredns-5dd5756b68-2kmf2"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.504532    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8snrh\" (UniqueName: \"kubernetes.io/projected/9ede1da5-704c-4aab-93e0-77ce93158129-kube-api-access-8snrh\") pod \"storage-provisioner\" (UID: \"9ede1da5-704c-4aab-93e0-77ce93158129\") " pod="kube-system/storage-provisioner"
	Nov 24 03:38:00 old-k8s-version-098965 kubelet[1540]: I1124 03:38:00.504567    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/9ede1da5-704c-4aab-93e0-77ce93158129-tmp\") pod \"storage-provisioner\" (UID: \"9ede1da5-704c-4aab-93e0-77ce93158129\") " pod="kube-system/storage-provisioner"
	Nov 24 03:38:01 old-k8s-version-098965 kubelet[1540]: I1124 03:38:01.277737    1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.277693839 podCreationTimestamp="2025-11-24 03:37:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:38:01.256250253 +0000 UTC m=+28.415281605" watchObservedRunningTime="2025-11-24 03:38:01.277693839 +0000 UTC m=+28.436725192"
	Nov 24 03:38:03 old-k8s-version-098965 kubelet[1540]: I1124 03:38:03.329633    1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-2kmf2" podStartSLOduration=17.329588381 podCreationTimestamp="2025-11-24 03:37:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:38:01.279992944 +0000 UTC m=+28.439024297" watchObservedRunningTime="2025-11-24 03:38:03.329588381 +0000 UTC m=+30.488619734"
	Nov 24 03:38:03 old-k8s-version-098965 kubelet[1540]: I1124 03:38:03.329845    1540 topology_manager.go:215] "Topology Admit Handler" podUID="b377806c-ae20-44d2-9d0f-07b097026328" podNamespace="default" podName="busybox"
	Nov 24 03:38:03 old-k8s-version-098965 kubelet[1540]: I1124 03:38:03.426801    1540 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn2qh\" (UniqueName: \"kubernetes.io/projected/b377806c-ae20-44d2-9d0f-07b097026328-kube-api-access-wn2qh\") pod \"busybox\" (UID: \"b377806c-ae20-44d2-9d0f-07b097026328\") " pod="default/busybox"
	Nov 24 03:38:06 old-k8s-version-098965 kubelet[1540]: I1124 03:38:06.274643    1540 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.222747006 podCreationTimestamp="2025-11-24 03:38:03 +0000 UTC" firstStartedPulling="2025-11-24 03:38:03.75886784 +0000 UTC m=+30.917899193" lastFinishedPulling="2025-11-24 03:38:05.810715943 +0000 UTC m=+32.969747296" observedRunningTime="2025-11-24 03:38:06.27449371 +0000 UTC m=+33.433525071" watchObservedRunningTime="2025-11-24 03:38:06.274595109 +0000 UTC m=+33.433626495"
	
	
	==> storage-provisioner [32ab776c7affb85bd5965dee0104d1470d0553d2b7a80e479ea0fc030ea67388] <==
	I1124 03:38:01.039603       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:38:01.054106       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:38:01.054328       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 03:38:01.064918       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:38:01.065095       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-098965_e3e8caf0-85bd-4d0b-af08-80a33b7d616e!
	I1124 03:38:01.066102       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c304fee8-eb73-4695-8997-27ec70001b31", APIVersion:"v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-098965_e3e8caf0-85bd-4d0b-af08-80a33b7d616e became leader
	I1124 03:38:01.165252       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-098965_e3e8caf0-85bd-4d0b-af08-80a33b7d616e!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098965 -n old-k8s-version-098965
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-098965 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (12.85s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (12.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-262280 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [820858e8-9815-41a7-a6c3-43bbfe947f4b] Pending
helpers_test.go:352: "busybox" [820858e8-9815-41a7-a6c3-43bbfe947f4b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [820858e8-9815-41a7-a6c3-43bbfe947f4b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00312348s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-262280 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
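The assertion that fails here boils down to a single exec'd command. For reference, a minimal standalone sketch along the following lines (not part of the test harness; it assumes kubectl is on PATH and uses the no-preload-262280 context shown in the log) reproduces the same check, with 1048576 mirroring the value the test expects:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Run the same command the test runs: `ulimit -n` inside the busybox pod.
	out, err := exec.Command(
		"kubectl", "--context", "no-preload-262280",
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n",
	).Output()
	if err != nil {
		fmt.Println("kubectl exec failed:", err)
		return
	}
	got := strings.TrimSpace(string(out))
	if got != "1048576" {
		fmt.Printf("'ulimit -n' returned %s, expected 1048576\n", got)
		return
	}
	fmt.Println("file descriptor limit matches the expected 1048576")
}

A value of 1024, as seen above, suggests the pod is inheriting the runtime's default soft nofile limit rather than the raised limit the test expects.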
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-262280
helpers_test.go:243: (dbg) docker inspect no-preload-262280:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43",
	        "Created": "2025-11-24T03:39:39.125759588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 465758,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:39:39.222912836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43/hostname",
	        "HostsPath": "/var/lib/docker/containers/35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43/hosts",
	        "LogPath": "/var/lib/docker/containers/35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43/35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43-json.log",
	        "Name": "/no-preload-262280",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-262280:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-262280",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43",
	                "LowerDir": "/var/lib/docker/overlay2/1a690ac398d6ea4279990c525ce2b1ce9b0be841ce796f32faa57c71d3bcc7c7-init/diff:/var/lib/docker/overlay2/11b197f530f0d571f61892814d8d4c774f7d3e5a97abdd8c5aa182cc99b2d856/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a690ac398d6ea4279990c525ce2b1ce9b0be841ce796f32faa57c71d3bcc7c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a690ac398d6ea4279990c525ce2b1ce9b0be841ce796f32faa57c71d3bcc7c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a690ac398d6ea4279990c525ce2b1ce9b0be841ce796f32faa57c71d3bcc7c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-262280",
	                "Source": "/var/lib/docker/volumes/no-preload-262280/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-262280",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-262280",
	                "name.minikube.sigs.k8s.io": "no-preload-262280",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff86b97e9af3f76433d01ded73b3a20157bafebde6fceb4cf4f1ef2d072b94c8",
	            "SandboxKey": "/var/run/docker/netns/ff86b97e9af3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-262280": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:f1:9d:47:19:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c7a8069005652453f62a34c2e34a46a4f9e1a107e7ecc865b5e42d1b2ca7588f",
	                    "EndpointID": "2c04a89f2f320f71228617091bb8d81d0ca59d5a2ae2905b6fa3b657d1ab9b55",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-262280",
	                        "35fb5533c8b0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
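Note that HostConfig.Ulimits in the inspect output above is empty, i.e. no explicit nofile override was recorded on the node container. A small sketch like the one below (an illustration only, assuming the docker CLI is available and the no-preload-262280 container is still running) surfaces that field together with the soft nofile limit visible inside the node container; the limit seen by pods is inherited from containerd inside the node and can differ:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and returns its combined output, or the error text.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Sprintf("error: %v\n", err)
	}
	return string(out)
}

func main() {
	// Explicit ulimit overrides recorded on the node container (empty/null means none).
	fmt.Print(run("docker", "inspect", "--format", "{{json .HostConfig.Ulimits}}", "no-preload-262280"))
	// Soft nofile limit for a process started directly in the node container.
	fmt.Print(run("docker", "exec", "no-preload-262280", "/bin/sh", "-c", "ulimit -n"))
}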
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-262280 -n no-preload-262280
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-262280 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-262280 logs -n 25: (1.368736532s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ start   │ -p force-systemd-env-574539 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:35 UTC │
	│ delete  │ -p kubernetes-upgrade-850960                                                                                                                                                                                                                        │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-expiration-846384 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ force-systemd-env-574539 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p force-systemd-env-574539                                                                                                                                                                                                                         │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-options-216763 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ cert-options-216763 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ -p cert-options-216763 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p cert-options-216763                                                                                                                                                                                                                              │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-098965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ stop    │ -p old-k8s-version-098965 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-098965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ start   │ -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ image   │ old-k8s-version-098965 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ pause   │ -p old-k8s-version-098965 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ unpause │ -p old-k8s-version-098965 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p old-k8s-version-098965                                                                                                                                                                                                                           │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p old-k8s-version-098965                                                                                                                                                                                                                           │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p no-preload-262280 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-262280         │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ start   │ -p cert-expiration-846384 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p cert-expiration-846384                                                                                                                                                                                                                           │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p embed-certs-818836 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-818836        │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:39:54
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:39:54.770134  468607 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:39:54.770765  468607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:54.770803  468607 out.go:374] Setting ErrFile to fd 2...
	I1124 03:39:54.770823  468607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:54.771173  468607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:39:54.771694  468607 out.go:368] Setting JSON to false
	I1124 03:39:54.772710  468607 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8523,"bootTime":1763947072,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:39:54.772814  468607 start.go:143] virtualization:  
	I1124 03:39:54.776844  468607 out.go:179] * [embed-certs-818836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:39:54.781644  468607 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:39:54.781732  468607 notify.go:221] Checking for updates...
	I1124 03:39:54.787053  468607 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:39:54.790493  468607 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:39:54.793844  468607 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:39:54.797082  468607 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:39:54.800233  468607 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:39:54.803908  468607 config.go:182] Loaded profile config "no-preload-262280": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:39:54.804064  468607 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:39:54.846350  468607 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:39:54.846478  468607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:39:54.943233  468607 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-24 03:39:54.932926558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:39:54.943335  468607 docker.go:319] overlay module found
	I1124 03:39:54.946509  468607 out.go:179] * Using the docker driver based on user configuration
	I1124 03:39:54.950114  468607 start.go:309] selected driver: docker
	I1124 03:39:54.950133  468607 start.go:927] validating driver "docker" against <nil>
	I1124 03:39:54.950147  468607 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:39:54.950879  468607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:39:55.051907  468607 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-24 03:39:55.038363177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:39:55.052067  468607 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:39:55.052307  468607 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:39:55.055713  468607 out.go:179] * Using Docker driver with root privileges
	I1124 03:39:55.058665  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:39:55.058771  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:39:55.058786  468607 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:39:55.058875  468607 start.go:353] cluster config:
	{Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:39:55.062215  468607 out.go:179] * Starting "embed-certs-818836" primary control-plane node in "embed-certs-818836" cluster
	I1124 03:39:55.065106  468607 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:39:55.068109  468607 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:39:55.071078  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:39:55.071139  468607 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 03:39:55.071152  468607 cache.go:65] Caching tarball of preloaded images
	I1124 03:39:55.071260  468607 preload.go:238] Found /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 03:39:55.071275  468607 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 03:39:55.071398  468607 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json ...
	I1124 03:39:55.071424  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json: {Name:mk937c632daa818953aa058a3473ebcd37b1b74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:39:55.071593  468607 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:39:55.094186  468607 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:39:55.094210  468607 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:39:55.094227  468607 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:39:55.094258  468607 start.go:360] acquireMachinesLock for embed-certs-818836: {Name:mk5ce88de168b198a494858bb8201276136df5bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:39:55.094377  468607 start.go:364] duration metric: took 97.543µs to acquireMachinesLock for "embed-certs-818836"
	I1124 03:39:55.094417  468607 start.go:93] Provisioning new machine with config: &{Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:39:55.094497  468607 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:39:53.821541  465459 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.603191329s)
	I1124 03:39:53.821565  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 03:39:53.821584  465459 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:53.821636  465459 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:57.814796  465459 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.993137445s)
	I1124 03:39:57.814820  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:39:57.814838  465459 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:39:57.814894  465459 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:39:55.099888  468607 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:39:55.100165  468607 start.go:159] libmachine.API.Create for "embed-certs-818836" (driver="docker")
	I1124 03:39:55.100219  468607 client.go:173] LocalClient.Create starting
	I1124 03:39:55.100327  468607 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem
	I1124 03:39:55.100376  468607 main.go:143] libmachine: Decoding PEM data...
	I1124 03:39:55.100396  468607 main.go:143] libmachine: Parsing certificate...
	I1124 03:39:55.100448  468607 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem
	I1124 03:39:55.100500  468607 main.go:143] libmachine: Decoding PEM data...
	I1124 03:39:55.100517  468607 main.go:143] libmachine: Parsing certificate...
	I1124 03:39:55.100910  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:39:55.125795  468607 cli_runner.go:211] docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:39:55.125884  468607 network_create.go:284] running [docker network inspect embed-certs-818836] to gather additional debugging logs...
	I1124 03:39:55.125914  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836
	W1124 03:39:55.143227  468607 cli_runner.go:211] docker network inspect embed-certs-818836 returned with exit code 1
	I1124 03:39:55.143261  468607 network_create.go:287] error running [docker network inspect embed-certs-818836]: docker network inspect embed-certs-818836: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-818836 not found
	I1124 03:39:55.143275  468607 network_create.go:289] output of [docker network inspect embed-certs-818836]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-818836 not found
	
	** /stderr **
	I1124 03:39:55.143372  468607 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:39:55.161548  468607 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-752aaa40bb3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:00:20:e4:71:15} reservation:<nil>}
	I1124 03:39:55.161924  468607 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbb0dee281db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:ff:07:3e:91:0f} reservation:<nil>}
	I1124 03:39:55.162178  468607 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d95ffec60547 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:b5:f2:ed:07:1e} reservation:<nil>}
	I1124 03:39:55.162624  468607 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2c210}
	I1124 03:39:55.162647  468607 network_create.go:124] attempt to create docker network embed-certs-818836 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 03:39:55.162703  468607 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-818836 embed-certs-818836
	I1124 03:39:55.225512  468607 network_create.go:108] docker network embed-certs-818836 192.168.76.0/24 created
	I1124 03:39:55.225548  468607 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-818836" container
	I1124 03:39:55.225630  468607 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:39:55.242034  468607 cli_runner.go:164] Run: docker volume create embed-certs-818836 --label name.minikube.sigs.k8s.io=embed-certs-818836 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:39:55.262160  468607 oci.go:103] Successfully created a docker volume embed-certs-818836
	I1124 03:39:55.262245  468607 cli_runner.go:164] Run: docker run --rm --name embed-certs-818836-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-818836 --entrypoint /usr/bin/test -v embed-certs-818836:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:39:56.023650  468607 oci.go:107] Successfully prepared a docker volume embed-certs-818836
	I1124 03:39:56.023728  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:39:56.023743  468607 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:39:56.023811  468607 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-818836:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:39:58.487593  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:39:58.487627  465459 cache_images.go:125] Successfully loaded all cached images
	I1124 03:39:58.487632  465459 cache_images.go:94] duration metric: took 15.116520084s to LoadCachedImages
	I1124 03:39:58.487645  465459 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1124 03:39:58.487737  465459 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-262280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-262280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:39:58.487802  465459 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:39:58.517432  465459 cni.go:84] Creating CNI manager for ""
	I1124 03:39:58.517454  465459 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:39:58.517467  465459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:39:58.517491  465459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-262280 NodeName:no-preload-262280 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:39:58.517604  465459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-262280"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:39:58.517675  465459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:39:58.527708  465459 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 03:39:58.527826  465459 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 03:39:58.537240  465459 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1124 03:39:58.537336  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 03:39:58.538133  465459 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1124 03:39:58.538622  465459 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1124 03:39:58.544156  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 03:39:58.544188  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1124 03:39:59.579840  465459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:39:59.602240  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 03:39:59.612666  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 03:39:59.612754  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1124 03:39:59.686847  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 03:39:59.706955  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 03:39:59.707011  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1124 03:40:00.747521  465459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:00.765344  465459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1124 03:40:00.782659  465459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:00.799074  465459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1124 03:40:00.815268  465459 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:00.821044  465459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:00.834962  465459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:00.961773  465459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:00.983622  465459 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280 for IP: 192.168.85.2
	I1124 03:40:00.983698  465459 certs.go:195] generating shared ca certs ...
	I1124 03:40:00.983731  465459 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:00.983948  465459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:40:00.984027  465459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:40:00.984066  465459 certs.go:257] generating profile certs ...
	I1124 03:40:00.984149  465459 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key
	I1124 03:40:00.984190  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt with IP's: []
	I1124 03:40:01.602129  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt ...
	I1124 03:40:01.602164  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: {Name:mk5c809e6dd128dc33970522909ae40ed13851c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:01.602404  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key ...
	I1124 03:40:01.602420  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key: {Name:mk4c99883f96920c3d389a999045dde9f43e74fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:01.602523  465459 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859
	I1124 03:40:01.602540  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:40:02.066816  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 ...
	I1124 03:40:02.066899  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859: {Name:mkd9f7b00f0b8be089cbce37f7826610732080e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.067142  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859 ...
	I1124 03:40:02.067186  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859: {Name:mkaaed6b4175e7a41645d8c3454f2c44a0203858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.067372  465459 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt
	I1124 03:40:02.067467  465459 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key
	I1124 03:40:02.067543  465459 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key
	I1124 03:40:02.067564  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt with IP's: []
	I1124 03:40:02.465004  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt ...
	I1124 03:40:02.465036  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt: {Name:mkf027bf4f367183ad961bb9001139254f6258cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.465206  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key ...
	I1124 03:40:02.465221  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key: {Name:mk8915392d44290b2ab552251edca0730df8ed0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.465611  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:40:02.465663  465459 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:02.465681  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:40:02.465712  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:02.465746  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:02.465775  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:02.465824  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:02.466427  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:02.490422  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:40:02.538618  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:02.580031  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:02.623593  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:40:02.657524  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:40:02.687220  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:02.710371  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:02.732274  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:40:02.755007  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:40:02.777653  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:02.805037  465459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:02.826328  465459 ssh_runner.go:195] Run: openssl version
	I1124 03:40:02.842808  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:40:02.861247  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.869101  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.869168  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.973780  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
	I1124 03:40:02.983869  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:40:03.003344  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.014606  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.014678  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.100872  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:03.119219  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:03.132707  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.143890  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.143956  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.227580  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:03.241329  465459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:03.250558  465459 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:40:03.250662  465459 kubeadm.go:401] StartCluster: {Name:no-preload-262280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-262280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:03.250758  465459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:03.250841  465459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:03.389740  465459 cri.go:89] found id: ""
	I1124 03:40:03.389818  465459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:40:03.413175  465459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:40:03.434949  465459 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:40:03.435019  465459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:40:03.450572  465459 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:40:03.450591  465459 kubeadm.go:158] found existing configuration files:
	
	I1124 03:40:03.450643  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:40:03.481203  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:40:03.481293  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:40:03.505063  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:40:03.526828  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:40:03.526899  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:40:03.542273  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:40:03.554380  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:40:03.554459  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:40:03.565133  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:40:03.583655  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:40:03.583761  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:40:03.600101  465459 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:40:03.695740  465459 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:40:03.695802  465459 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:40:03.729178  465459 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:40:03.729476  465459 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:40:03.729518  465459 kubeadm.go:319] OS: Linux
	I1124 03:40:03.729563  465459 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:40:03.729611  465459 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:40:03.729658  465459 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:40:03.729710  465459 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:40:03.729759  465459 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:40:03.729806  465459 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:40:03.729851  465459 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:40:03.729911  465459 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:40:03.729958  465459 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:40:03.847775  465459 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:40:03.847886  465459 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:40:03.847977  465459 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:40:03.860909  465459 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:40:02.325904  468607 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-818836:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (6.302044362s)
	I1124 03:40:02.325939  468607 kic.go:203] duration metric: took 6.302193098s to extract preloaded images to volume ...
	W1124 03:40:02.326078  468607 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 03:40:02.326190  468607 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:40:02.445610  468607 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-818836 --name embed-certs-818836 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-818836 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-818836 --network embed-certs-818836 --ip 192.168.76.2 --volume embed-certs-818836:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:40:02.830161  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Running}}
	I1124 03:40:02.858743  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:02.883367  468607 cli_runner.go:164] Run: docker exec embed-certs-818836 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:40:02.940884  468607 oci.go:144] the created container "embed-certs-818836" has a running status.
	I1124 03:40:02.940913  468607 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa...
	I1124 03:40:03.398411  468607 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:40:03.429853  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:03.464067  468607 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:40:03.464088  468607 kic_runner.go:114] Args: [docker exec --privileged embed-certs-818836 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:40:03.540196  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:03.576062  468607 machine.go:94] provisionDockerMachine start ...
	I1124 03:40:03.576168  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:03.596498  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.597706  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:03.597742  468607 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:40:03.598783  468607 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 03:40:03.865701  465459 out.go:252]   - Generating certificates and keys ...
	I1124 03:40:03.865794  465459 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:40:03.865861  465459 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:40:04.261018  465459 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:40:04.423750  465459 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:40:04.784877  465459 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:40:05.469508  465459 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:40:05.670184  465459 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:40:05.670529  465459 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-262280] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:40:05.916276  465459 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:40:05.916671  465459 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-262280] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:40:06.295195  465459 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:40:06.703517  465459 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:40:07.221344  465459 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:40:07.221867  465459 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:40:06.756947  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-818836
	
	I1124 03:40:06.757024  468607 ubuntu.go:182] provisioning hostname "embed-certs-818836"
	I1124 03:40:06.757117  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:06.780855  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:06.781159  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:06.781170  468607 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-818836 && echo "embed-certs-818836" | sudo tee /etc/hostname
	I1124 03:40:06.952924  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-818836
	
	I1124 03:40:06.953068  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:06.976988  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:06.977313  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:06.977329  468607 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-818836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-818836/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-818836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:40:07.145464  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:40:07.145556  468607 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-255205/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-255205/.minikube}
	I1124 03:40:07.145614  468607 ubuntu.go:190] setting up certificates
	I1124 03:40:07.145642  468607 provision.go:84] configureAuth start
	I1124 03:40:07.145739  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.169212  468607 provision.go:143] copyHostCerts
	I1124 03:40:07.169290  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem, removing ...
	I1124 03:40:07.169299  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem
	I1124 03:40:07.169376  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem (1078 bytes)
	I1124 03:40:07.169475  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem, removing ...
	I1124 03:40:07.169480  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem
	I1124 03:40:07.169506  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem (1123 bytes)
	I1124 03:40:07.169572  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem, removing ...
	I1124 03:40:07.169578  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem
	I1124 03:40:07.169604  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem (1675 bytes)
	I1124 03:40:07.169661  468607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem org=jenkins.embed-certs-818836 san=[127.0.0.1 192.168.76.2 embed-certs-818836 localhost minikube]
	I1124 03:40:07.418050  468607 provision.go:177] copyRemoteCerts
	I1124 03:40:07.418164  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:40:07.418250  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.436857  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.541668  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:40:07.562105  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 03:40:07.582528  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:40:07.603626  468607 provision.go:87] duration metric: took 457.949417ms to configureAuth
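	(For reference: the server certificate provisioned above embeds the SANs listed in the log, i.e. 127.0.0.1, 192.168.76.2, embed-certs-818836, localhost and minikube, and is copied to /etc/docker/server.pem. A quick way to confirm the SANs on the node, assuming openssl is present in the node image, would be roughly:
	    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	This is only an illustrative spot-check of the files the log says were copied.)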
	I1124 03:40:07.603697  468607 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:40:07.603915  468607 config.go:182] Loaded profile config "embed-certs-818836": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:07.603945  468607 machine.go:97] duration metric: took 4.027864554s to provisionDockerMachine
	I1124 03:40:07.603968  468607 client.go:176] duration metric: took 12.503739627s to LocalClient.Create
	I1124 03:40:07.603998  468607 start.go:167] duration metric: took 12.503833413s to libmachine.API.Create "embed-certs-818836"
	I1124 03:40:07.604072  468607 start.go:293] postStartSetup for "embed-certs-818836" (driver="docker")
	I1124 03:40:07.604107  468607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:40:07.604203  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:40:07.604265  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.632600  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.737983  468607 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:40:07.742314  468607 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:40:07.742341  468607 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:40:07.742353  468607 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/addons for local assets ...
	I1124 03:40:07.742407  468607 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/files for local assets ...
	I1124 03:40:07.742485  468607 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem -> 2570692.pem in /etc/ssl/certs
	I1124 03:40:07.742591  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:40:07.751254  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:07.775588  468607 start.go:296] duration metric: took 171.476748ms for postStartSetup
	I1124 03:40:07.776070  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.810247  468607 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json ...
	I1124 03:40:07.810536  468607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:40:07.810584  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.829698  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.934319  468607 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:40:07.940379  468607 start.go:128] duration metric: took 12.845864213s to createHost
	I1124 03:40:07.940407  468607 start.go:83] releasing machines lock for "embed-certs-818836", held for 12.84601335s
	I1124 03:40:07.940518  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.966549  468607 ssh_runner.go:195] Run: cat /version.json
	I1124 03:40:07.966614  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.966858  468607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:40:07.966916  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:08.009694  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:08.010496  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:08.140825  468607 ssh_runner.go:195] Run: systemctl --version
	I1124 03:40:08.236306  468607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:40:08.241952  468607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:40:08.242033  468607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:40:08.275925  468607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 03:40:08.276006  468607 start.go:496] detecting cgroup driver to use...
	I1124 03:40:08.276054  468607 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 03:40:08.276163  468607 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:40:08.293354  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:40:08.309121  468607 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:40:08.309273  468607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:40:08.329161  468607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:40:08.349309  468607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:40:08.512169  468607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:40:08.692876  468607 docker.go:234] disabling docker service ...
	I1124 03:40:08.692943  468607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:40:08.722865  468607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:40:08.738391  468607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:40:08.914395  468607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:40:09.078224  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:40:09.099626  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:40:09.127201  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:40:09.137475  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:40:09.151390  468607 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 03:40:09.151466  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 03:40:09.161530  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:40:09.179218  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:40:09.188732  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:40:09.198154  468607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:40:09.206565  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:40:09.215833  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:40:09.225156  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:40:09.234765  468607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:40:09.243300  468607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:40:09.251671  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:09.434190  468607 ssh_runner.go:195] Run: sudo systemctl restart containerd
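	(For reference: the sed edits above leave /etc/containerd/config.toml using the cgroupfs driver and the pinned pause image. A rough spot-check after the restart, with expected values taken from the commands above and the exact layout depending on the containerd 2.x config shipped in the image, would be:
	    sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	    # SystemdCgroup = false
	    # sandbox_image = "registry.k8s.io/pause:3.10.1"
	    # conf_dir = "/etc/cni/net.d"
	)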
	I1124 03:40:09.629101  468607 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:40:09.629177  468607 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:40:09.633574  468607 start.go:564] Will wait 60s for crictl version
	I1124 03:40:09.633686  468607 ssh_runner.go:195] Run: which crictl
	I1124 03:40:09.637799  468607 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:40:09.680020  468607 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:40:09.680112  468607 ssh_runner.go:195] Run: containerd --version
	I1124 03:40:09.701052  468607 ssh_runner.go:195] Run: containerd --version
	I1124 03:40:09.728551  468607 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 03:40:09.731602  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:40:09.752927  468607 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:40:09.757138  468607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
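	(For reference: the bash one-liner above rewrites /etc/hosts in place, so afterwards the node resolves host.minikube.internal to the network gateway. The resulting entry can be spot-checked with:
	    grep host.minikube.internal /etc/hosts
	    # 192.168.76.1	host.minikube.internal
	)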
	I1124 03:40:09.767237  468607 kubeadm.go:884] updating cluster {Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:40:09.767356  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:40:09.767434  468607 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:07.945073  465459 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:40:08.356082  465459 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:40:08.704960  465459 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:40:09.943963  465459 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:40:10.216943  465459 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:40:10.218580  465459 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:40:10.237543  465459 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:40:09.801793  468607 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:40:09.801818  468607 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:40:09.801887  468607 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:09.828434  468607 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:40:09.828460  468607 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:40:09.828491  468607 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 03:40:09.828596  468607 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-818836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
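	(For reference: the kubelet unit fragment printed above appears to correspond to the 322-byte drop-in that is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. A sketch of how to inspect what systemd actually loads on the node:
	    systemctl cat kubelet
	    systemctl show kubelet -p ExecStart --no-pager
	)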
	I1124 03:40:09.828666  468607 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:40:09.855719  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:40:09.855746  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:09.855754  468607 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:40:09.855777  468607 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-818836 NodeName:embed-certs-818836 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:40:09.855896  468607 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-818836"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:40:09.855970  468607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:40:09.864082  468607 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:40:09.864155  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:09.871799  468607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 03:40:09.885236  468607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:09.903151  468607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
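	(For reference: the rendered config shown above has just been copied to /var/tmp/minikube/kubeadm.yaml.new, and is later moved to /var/tmp/minikube/kubeadm.yaml before init. As a sketch, and assuming kubeadm sits next to kubelet/kubectl under the binaries path as the init invocation further down suggests, it could be sanity-checked ahead of the real init with kubeadm's dry-run mode:
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run
	)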
	I1124 03:40:09.916330  468607 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:09.920755  468607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:09.930245  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:10.095373  468607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:10.120719  468607 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836 for IP: 192.168.76.2
	I1124 03:40:10.120751  468607 certs.go:195] generating shared ca certs ...
	I1124 03:40:10.120775  468607 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.120926  468607 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:40:10.121022  468607 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:40:10.121036  468607 certs.go:257] generating profile certs ...
	I1124 03:40:10.121101  468607 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key
	I1124 03:40:10.121117  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt with IP's: []
	I1124 03:40:10.420574  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt ...
	I1124 03:40:10.420618  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt: {Name:mk242703eac12cbe34e4028bdd5925f7440b86e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.420945  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key ...
	I1124 03:40:10.420962  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key: {Name:mk4f7dbe6cf87f427019f2b9bb878908f82573e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.421164  468607 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253
	I1124 03:40:10.421185  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:40:10.579421  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 ...
	I1124 03:40:10.579459  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253: {Name:mk072dbea8dc92562bf332b98a65b57fa9581398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.579707  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253 ...
	I1124 03:40:10.579733  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253: {Name:mk3986530288979c5c9a2178817e35e45248f3c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.579920  468607 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt
	I1124 03:40:10.580110  468607 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key
	I1124 03:40:10.580235  468607 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key
	I1124 03:40:10.580282  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt with IP's: []
	I1124 03:40:10.650382  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt ...
	I1124 03:40:10.650422  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt: {Name:mk7002a63ade6dd6830536f0b45108488d8d2647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.650709  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key ...
	I1124 03:40:10.650730  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key: {Name:mk9ed88761ece5843396144a4fbfafba4af7e713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.651036  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:40:10.651117  468607 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:10.651134  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:40:10.651185  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:10.651246  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:10.651301  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:10.651375  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:10.652050  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:10.674232  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:40:10.698101  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:10.717381  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:10.737149  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:40:10.761648  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:40:10.786481  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:10.807220  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:10.827613  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:10.849625  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:40:10.870797  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:40:10.892331  468607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:10.908461  468607 ssh_runner.go:195] Run: openssl version
	I1124 03:40:10.916101  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:40:10.926608  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.931358  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.931455  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.976219  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:10.986375  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:10.996391  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.017389  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.017511  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.093548  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:11.109631  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:40:11.122383  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.127328  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.127425  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.171896  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
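	(For reference: the symlink names used above, 3ec20f2e.0, b5213941.0 and 51391683.0, are OpenSSL subject-hash names computed by the `openssl x509 -hash -noout` runs in the log. The same links could be produced generically with something like:
	    for c in 2570692.pem minikubeCA.pem 257069.pem; do
	      sudo ln -fs "/etc/ssl/certs/$c" "/etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/$c).0"
	    done
	)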
	I1124 03:40:11.181990  468607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:11.186817  468607 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:40:11.186902  468607 kubeadm.go:401] StartCluster: {Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:11.187015  468607 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:11.187107  468607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:11.229657  468607 cri.go:89] found id: ""
	I1124 03:40:11.229767  468607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:40:11.239862  468607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:40:11.249588  468607 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:40:11.249708  468607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:40:11.261397  468607 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:40:11.261464  468607 kubeadm.go:158] found existing configuration files:
	
	I1124 03:40:11.261537  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:40:11.271489  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:40:11.271603  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:40:11.282245  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:40:11.295430  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:40:11.295544  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:40:11.303936  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:40:11.314965  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:40:11.315086  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:40:11.322532  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:40:11.331297  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:40:11.331410  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:40:11.339587  468607 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:40:11.388094  468607 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:40:11.388694  468607 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:40:11.418975  468607 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:40:11.419097  468607 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:40:11.419162  468607 kubeadm.go:319] OS: Linux
	I1124 03:40:11.419229  468607 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:40:11.419310  468607 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:40:11.419397  468607 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:40:11.419482  468607 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:40:11.419545  468607 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:40:11.419609  468607 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:40:11.419672  468607 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:40:11.419733  468607 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:40:11.419793  468607 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:40:11.498745  468607 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:40:11.498892  468607 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:40:11.499019  468607 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:40:11.505807  468607 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:40:10.241345  465459 out.go:252]   - Booting up control plane ...
	I1124 03:40:10.241455  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:40:10.245314  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:40:10.248607  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:40:10.281242  465459 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:40:10.281374  465459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:40:10.290260  465459 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:40:10.290359  465459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:40:10.290400  465459 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:40:10.449824  465459 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:40:10.450005  465459 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:40:11.952880  465459 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500833117s
	I1124 03:40:11.954116  465459 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:40:11.954483  465459 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:40:11.954823  465459 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:40:11.955791  465459 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:40:11.512278  468607 out.go:252]   - Generating certificates and keys ...
	I1124 03:40:11.512384  468607 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:40:11.512475  468607 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:40:12.156551  468607 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:40:12.440381  468607 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:40:13.054828  468607 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:40:14.412107  468607 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:40:17.439040  465459 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.482829056s
	I1124 03:40:14.824196  468607 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:40:14.824831  468607 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-818836 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:40:15.040863  468607 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:40:15.040998  468607 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-818836 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:40:15.376085  468607 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:40:15.719552  468607 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:40:16.788559  468607 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:40:16.789083  468607 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:40:17.179360  468607 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:40:17.589911  468607 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:40:18.716938  468607 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:40:19.434256  468607 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:40:19.598171  468607 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:40:19.599352  468607 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:40:19.612523  468607 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:40:19.615809  468607 out.go:252]   - Booting up control plane ...
	I1124 03:40:19.615923  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:40:19.616002  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:40:19.616070  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:40:19.643244  468607 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:40:19.643372  468607 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:40:19.651919  468607 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:40:19.660667  468607 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:40:19.661493  468607 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:40:20.959069  465459 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.003836426s
	I1124 03:40:22.125067  465459 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.16861254s
	I1124 03:40:22.188271  465459 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:40:22.216515  465459 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:40:22.258578  465459 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:40:22.259036  465459 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-262280 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:40:22.271087  465459 kubeadm.go:319] [bootstrap-token] Using token: 2yptao.r7yd6l7ev1yowcqn
	I1124 03:40:22.274016  465459 out.go:252]   - Configuring RBAC rules ...
	I1124 03:40:22.274139  465459 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:40:22.285868  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:40:22.302245  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:40:22.309475  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:40:22.314669  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:40:22.324840  465459 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:40:22.533610  465459 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:40:22.993832  465459 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:40:23.539106  465459 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:40:23.540728  465459 kubeadm.go:319] 
	I1124 03:40:23.540809  465459 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:40:23.540814  465459 kubeadm.go:319] 
	I1124 03:40:23.540891  465459 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:40:23.540895  465459 kubeadm.go:319] 
	I1124 03:40:23.540920  465459 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:40:23.541365  465459 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:40:23.541428  465459 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:40:23.541434  465459 kubeadm.go:319] 
	I1124 03:40:23.541487  465459 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:40:23.541491  465459 kubeadm.go:319] 
	I1124 03:40:23.541539  465459 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:40:23.541542  465459 kubeadm.go:319] 
	I1124 03:40:23.541594  465459 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:40:23.541669  465459 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:40:23.541737  465459 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:40:23.541741  465459 kubeadm.go:319] 
	I1124 03:40:23.542069  465459 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:40:23.542155  465459 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:40:23.542159  465459 kubeadm.go:319] 
	I1124 03:40:23.542500  465459 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2yptao.r7yd6l7ev1yowcqn \
	I1124 03:40:23.542614  465459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 \
	I1124 03:40:23.542853  465459 kubeadm.go:319] 	--control-plane 
	I1124 03:40:23.542871  465459 kubeadm.go:319] 
	I1124 03:40:23.543221  465459 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:40:23.543231  465459 kubeadm.go:319] 
	I1124 03:40:23.547828  465459 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2yptao.r7yd6l7ev1yowcqn \
	I1124 03:40:23.550982  465459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 
	I1124 03:40:23.555511  465459 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:40:23.555736  465459 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:40:23.555841  465459 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
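	(For reference: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. Assuming an RSA CA key, as minikube generates, and the certificatesDir used in the kubeadm config shown earlier, it could be recomputed on the no-preload-262280 control plane roughly as:
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | awk '{print $NF}'
	)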
	I1124 03:40:23.555857  465459 cni.go:84] Creating CNI manager for ""
	I1124 03:40:23.555865  465459 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:23.559067  465459 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:40:19.836180  468607 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:40:19.836307  468607 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:40:20.837911  468607 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001791556s
	I1124 03:40:20.841824  468607 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:40:20.841924  468607 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 03:40:20.842025  468607 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:40:20.842109  468607 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:40:23.561962  465459 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:40:23.570649  465459 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:40:23.570666  465459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:40:23.611043  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:40:24.448553  465459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:40:24.448680  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:24.448750  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-262280 minikube.k8s.io/updated_at=2025_11_24T03_40_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-262280 minikube.k8s.io/primary=true
	I1124 03:40:25.025787  465459 ops.go:34] apiserver oom_adj: -16
	I1124 03:40:25.025937  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:25.526394  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:26.025997  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:26.526754  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:27.026641  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:27.253055  465459 kubeadm.go:1114] duration metric: took 2.804418537s to wait for elevateKubeSystemPrivileges
	I1124 03:40:27.253082  465459 kubeadm.go:403] duration metric: took 24.002425527s to StartCluster
	I1124 03:40:27.253101  465459 settings.go:142] acquiring lock: {Name:mk06b563e5bc383cd64ed92ea3d8ac6aac195923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:27.253165  465459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:40:27.253834  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/kubeconfig: {Name:mk59b88a9b5c6c93f7412b3f64976d4efe64bdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:27.254034  465459 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:40:27.254180  465459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:40:27.254424  465459 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:40:27.254486  465459 addons.go:70] Setting storage-provisioner=true in profile "no-preload-262280"
	I1124 03:40:27.254500  465459 addons.go:239] Setting addon storage-provisioner=true in "no-preload-262280"
	I1124 03:40:27.254522  465459 host.go:66] Checking if "no-preload-262280" exists ...
	I1124 03:40:27.255029  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.255348  465459 config.go:182] Loaded profile config "no-preload-262280": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:27.255425  465459 addons.go:70] Setting default-storageclass=true in profile "no-preload-262280"
	I1124 03:40:27.255459  465459 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-262280"
	I1124 03:40:27.255742  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.258534  465459 out.go:179] * Verifying Kubernetes components...
	I1124 03:40:27.264721  465459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:27.290687  465459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:40:27.293638  465459 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:27.293665  465459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:40:27.293734  465459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-262280
	I1124 03:40:27.295179  465459 addons.go:239] Setting addon default-storageclass=true in "no-preload-262280"
	I1124 03:40:27.295223  465459 host.go:66] Checking if "no-preload-262280" exists ...
	I1124 03:40:27.295646  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.333873  465459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/no-preload-262280/id_rsa Username:docker}
	I1124 03:40:27.342194  465459 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:27.342217  465459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:40:27.342282  465459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-262280
	I1124 03:40:27.369752  465459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/no-preload-262280/id_rsa Username:docker}
	I1124 03:40:28.289510  468607 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.446711872s
	I1124 03:40:28.718064  468607 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.876138727s
	I1124 03:40:28.086729  465459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:28.166898  465459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:40:28.167031  465459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:28.202605  465459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:29.603255  465459 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.436193485s)
	I1124 03:40:29.604024  465459 node_ready.go:35] waiting up to 6m0s for node "no-preload-262280" to be "Ready" ...
	I1124 03:40:29.604243  465459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.437316052s)
	I1124 03:40:29.604267  465459 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:40:30.149139  465459 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-262280" context rescaled to 1 replicas
	I1124 03:40:30.266899  465459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.064217856s)
	I1124 03:40:30.272444  465459 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 03:40:30.843974  468607 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002059314s
	I1124 03:40:30.870609  468607 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:40:30.901638  468607 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:40:30.924179  468607 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:40:30.924719  468607 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-818836 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:40:30.940184  468607 kubeadm.go:319] [bootstrap-token] Using token: 0bimeo.bzidkyv9i8e7nkw3
	I1124 03:40:30.943266  468607 out.go:252]   - Configuring RBAC rules ...
	I1124 03:40:30.943387  468607 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:40:30.951610  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:40:30.963677  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:40:30.971959  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:40:30.977923  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:40:30.986249  468607 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:40:31.251471  468607 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:40:31.778202  468607 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:40:32.251684  468607 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:40:32.253477  468607 kubeadm.go:319] 
	I1124 03:40:32.253550  468607 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:40:32.253555  468607 kubeadm.go:319] 
	I1124 03:40:32.253632  468607 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:40:32.253637  468607 kubeadm.go:319] 
	I1124 03:40:32.253662  468607 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:40:32.254164  468607 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:40:32.254227  468607 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:40:32.254231  468607 kubeadm.go:319] 
	I1124 03:40:32.254285  468607 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:40:32.254288  468607 kubeadm.go:319] 
	I1124 03:40:32.254336  468607 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:40:32.254339  468607 kubeadm.go:319] 
	I1124 03:40:32.254391  468607 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:40:32.254466  468607 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:40:32.254534  468607 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:40:32.254538  468607 kubeadm.go:319] 
	I1124 03:40:32.254839  468607 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:40:32.254921  468607 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:40:32.254928  468607 kubeadm.go:319] 
	I1124 03:40:32.255259  468607 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0bimeo.bzidkyv9i8e7nkw3 \
	I1124 03:40:32.255368  468607 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 \
	I1124 03:40:32.255600  468607 kubeadm.go:319] 	--control-plane 
	I1124 03:40:32.255610  468607 kubeadm.go:319] 
	I1124 03:40:32.255896  468607 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:40:32.255905  468607 kubeadm.go:319] 
	I1124 03:40:32.256198  468607 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0bimeo.bzidkyv9i8e7nkw3 \
	I1124 03:40:32.256558  468607 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 
	I1124 03:40:32.262002  468607 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:40:32.262227  468607 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:40:32.262331  468607 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:40:32.262347  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:40:32.262355  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:32.265575  468607 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:40:30.275374  465459 addons.go:530] duration metric: took 3.020937085s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1124 03:40:31.607716  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:32.268802  468607 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:40:32.276058  468607 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:40:32.276076  468607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:40:32.304040  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:40:32.950060  468607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:40:32.950194  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:32.950260  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-818836 minikube.k8s.io/updated_at=2025_11_24T03_40_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-818836 minikube.k8s.io/primary=true
	I1124 03:40:33.247296  468607 ops.go:34] apiserver oom_adj: -16
	I1124 03:40:33.247413  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:33.747810  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:34.247563  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:34.747727  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:35.248529  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:35.747874  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:36.248065  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:36.747517  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:37.248357  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:37.375914  468607 kubeadm.go:1114] duration metric: took 4.425764478s to wait for elevateKubeSystemPrivileges
	I1124 03:40:37.375948  468607 kubeadm.go:403] duration metric: took 26.189049705s to StartCluster
	I1124 03:40:37.375965  468607 settings.go:142] acquiring lock: {Name:mk06b563e5bc383cd64ed92ea3d8ac6aac195923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:37.376029  468607 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:40:37.377428  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/kubeconfig: {Name:mk59b88a9b5c6c93f7412b3f64976d4efe64bdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:37.377669  468607 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:40:37.377785  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:40:37.378042  468607 config.go:182] Loaded profile config "embed-certs-818836": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:37.378089  468607 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:40:37.378159  468607 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-818836"
	I1124 03:40:37.378172  468607 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-818836"
	I1124 03:40:37.378198  468607 host.go:66] Checking if "embed-certs-818836" exists ...
	I1124 03:40:37.378697  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.378976  468607 addons.go:70] Setting default-storageclass=true in profile "embed-certs-818836"
	I1124 03:40:37.379003  468607 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-818836"
	I1124 03:40:37.379254  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.381419  468607 out.go:179] * Verifying Kubernetes components...
	I1124 03:40:37.384428  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:37.421715  468607 addons.go:239] Setting addon default-storageclass=true in "embed-certs-818836"
	I1124 03:40:37.421763  468607 host.go:66] Checking if "embed-certs-818836" exists ...
	I1124 03:40:37.422190  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.443094  468607 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:40:34.107205  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	W1124 03:40:36.107495  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:37.445972  468607 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:37.445995  468607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:40:37.446062  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:37.468083  468607 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:37.468107  468607 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:40:37.468173  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:37.505843  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:37.512810  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:37.807453  468607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:37.824901  468607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:37.825083  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:40:37.844459  468607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:38.592240  468607 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 03:40:38.594605  468607 node_ready.go:35] waiting up to 6m0s for node "embed-certs-818836" to be "Ready" ...
	I1124 03:40:38.651892  468607 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:40:38.655002  468607 addons.go:530] duration metric: took 1.276905995s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:40:39.096916  468607 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-818836" context rescaled to 1 replicas
	W1124 03:40:38.606995  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	W1124 03:40:40.607344  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:42.608225  465459 node_ready.go:49] node "no-preload-262280" is "Ready"
	I1124 03:40:42.608272  465459 node_ready.go:38] duration metric: took 13.004210314s for node "no-preload-262280" to be "Ready" ...
	I1124 03:40:42.608287  465459 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:40:42.608350  465459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:42.623406  465459 api_server.go:72] duration metric: took 15.369343221s to wait for apiserver process to appear ...
	I1124 03:40:42.623436  465459 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:40:42.623469  465459 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:40:42.633313  465459 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:40:42.634411  465459 api_server.go:141] control plane version: v1.34.1
	I1124 03:40:42.634433  465459 api_server.go:131] duration metric: took 10.990663ms to wait for apiserver health ...
	I1124 03:40:42.634442  465459 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:40:42.638347  465459 system_pods.go:59] 8 kube-system pods found
	I1124 03:40:42.638381  465459 system_pods.go:61] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.638387  465459 system_pods.go:61] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.638392  465459 system_pods.go:61] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.638396  465459 system_pods.go:61] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.638401  465459 system_pods.go:61] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.638404  465459 system_pods.go:61] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.638407  465459 system_pods.go:61] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.638413  465459 system_pods.go:61] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:42.638420  465459 system_pods.go:74] duration metric: took 3.972643ms to wait for pod list to return data ...
	I1124 03:40:42.638431  465459 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:40:42.641761  465459 default_sa.go:45] found service account: "default"
	I1124 03:40:42.641824  465459 default_sa.go:55] duration metric: took 3.386704ms for default service account to be created ...
	I1124 03:40:42.641868  465459 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:40:42.645101  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:42.645134  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.645141  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.645147  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.645155  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.645160  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.645164  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.645168  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.645173  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:42.645193  465459 retry.go:31] will retry after 242.077653ms: missing components: kube-dns
	I1124 03:40:42.893628  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:42.893678  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.893684  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.893699  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.893704  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.893709  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.893713  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.893716  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.893720  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:42.893822  465459 retry.go:31] will retry after 373.532935ms: missing components: kube-dns
	W1124 03:40:40.597355  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:42.597817  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:44.598213  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	I1124 03:40:43.271122  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:43.271161  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:43.271172  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:43.271178  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:43.271182  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:43.271187  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:43.271191  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:43.271195  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:43.271206  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:43.271221  465459 retry.go:31] will retry after 322.6325ms: missing components: kube-dns
	I1124 03:40:43.599918  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:43.600007  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:43.600023  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:43.600030  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:43.600035  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:43.600040  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:43.600044  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:43.600048  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:43.600051  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:43.600066  465459 retry.go:31] will retry after 394.949668ms: missing components: kube-dns
	I1124 03:40:44.001892  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:44.001938  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Running
	I1124 03:40:44.001946  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:44.001952  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:44.001960  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:44.001965  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:44.001968  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:44.001972  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:44.001976  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:44.001989  465459 system_pods.go:126] duration metric: took 1.36009666s to wait for k8s-apps to be running ...
	I1124 03:40:44.001998  465459 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:40:44.002065  465459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:40:44.023562  465459 system_svc.go:56] duration metric: took 21.553336ms WaitForService to wait for kubelet
	I1124 03:40:44.023598  465459 kubeadm.go:587] duration metric: took 16.769539879s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:40:44.023618  465459 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:40:44.027009  465459 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:40:44.027046  465459 node_conditions.go:123] node cpu capacity is 2
	I1124 03:40:44.027060  465459 node_conditions.go:105] duration metric: took 3.437042ms to run NodePressure ...
	I1124 03:40:44.027074  465459 start.go:242] waiting for startup goroutines ...
	I1124 03:40:44.027110  465459 start.go:247] waiting for cluster config update ...
	I1124 03:40:44.027129  465459 start.go:256] writing updated cluster config ...
	I1124 03:40:44.027439  465459 ssh_runner.go:195] Run: rm -f paused
	I1124 03:40:44.032809  465459 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:44.036889  465459 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mj9gd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.042142  465459 pod_ready.go:94] pod "coredns-66bc5c9577-mj9gd" is "Ready"
	I1124 03:40:44.042172  465459 pod_ready.go:86] duration metric: took 5.207096ms for pod "coredns-66bc5c9577-mj9gd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.044894  465459 pod_ready.go:83] waiting for pod "etcd-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.050138  465459 pod_ready.go:94] pod "etcd-no-preload-262280" is "Ready"
	I1124 03:40:44.050222  465459 pod_ready.go:86] duration metric: took 5.300135ms for pod "etcd-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.052994  465459 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.057831  465459 pod_ready.go:94] pod "kube-apiserver-no-preload-262280" is "Ready"
	I1124 03:40:44.057868  465459 pod_ready.go:86] duration metric: took 4.8387ms for pod "kube-apiserver-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.060783  465459 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.437093  465459 pod_ready.go:94] pod "kube-controller-manager-no-preload-262280" is "Ready"
	I1124 03:40:44.437124  465459 pod_ready.go:86] duration metric: took 376.313274ms for pod "kube-controller-manager-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.637747  465459 pod_ready.go:83] waiting for pod "kube-proxy-xg8w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.042982  465459 pod_ready.go:94] pod "kube-proxy-xg8w4" is "Ready"
	I1124 03:40:45.043021  465459 pod_ready.go:86] duration metric: took 405.246191ms for pod "kube-proxy-xg8w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.238605  465459 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.636771  465459 pod_ready.go:94] pod "kube-scheduler-no-preload-262280" is "Ready"
	I1124 03:40:45.636842  465459 pod_ready.go:86] duration metric: took 398.208005ms for pod "kube-scheduler-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.636877  465459 pod_ready.go:40] duration metric: took 1.604024878s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:45.700045  465459 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:40:45.703311  465459 out.go:179] * Done! kubectl is now configured to use "no-preload-262280" cluster and "default" namespace by default
	W1124 03:40:47.097978  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:49.098467  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	I1124 03:40:49.600289  468607 node_ready.go:49] node "embed-certs-818836" is "Ready"
	I1124 03:40:49.600325  468607 node_ready.go:38] duration metric: took 11.005685237s for node "embed-certs-818836" to be "Ready" ...
	I1124 03:40:49.600342  468607 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:40:49.600401  468607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:49.616102  468607 api_server.go:72] duration metric: took 12.238396901s to wait for apiserver process to appear ...
	I1124 03:40:49.616131  468607 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:40:49.616151  468607 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:40:49.625663  468607 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 03:40:49.628248  468607 api_server.go:141] control plane version: v1.34.1
	I1124 03:40:49.628298  468607 api_server.go:131] duration metric: took 12.158646ms to wait for apiserver health ...
	I1124 03:40:49.628308  468607 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:40:49.635456  468607 system_pods.go:59] 8 kube-system pods found
	I1124 03:40:49.635501  468607 system_pods.go:61] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.635509  468607 system_pods.go:61] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.635527  468607 system_pods.go:61] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.635531  468607 system_pods.go:61] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.635536  468607 system_pods.go:61] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.635542  468607 system_pods.go:61] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.635546  468607 system_pods.go:61] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.635559  468607 system_pods.go:61] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.635566  468607 system_pods.go:74] duration metric: took 7.25158ms to wait for pod list to return data ...
	I1124 03:40:49.635579  468607 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:40:49.639861  468607 default_sa.go:45] found service account: "default"
	I1124 03:40:49.639903  468607 default_sa.go:55] duration metric: took 4.317754ms for default service account to be created ...
	I1124 03:40:49.639914  468607 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:40:49.642908  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:49.642943  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.642950  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.642956  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.642961  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.642975  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.642979  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.642984  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.642992  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.643018  468607 retry.go:31] will retry after 271.674831ms: missing components: kube-dns
	I1124 03:40:49.919376  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:49.919415  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.919423  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.919429  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.919435  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.919440  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.919444  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.919448  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.919455  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.919474  468607 retry.go:31] will retry after 335.268613ms: missing components: kube-dns
	I1124 03:40:50.262160  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:50.262218  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:50.262226  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:50.262264  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:50.262281  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:50.262290  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:50.262298  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:50.262302  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:50.262312  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:50.262349  468607 retry.go:31] will retry after 385.617551ms: missing components: kube-dns
	I1124 03:40:50.651970  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:50.652010  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:50.652018  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:50.652025  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:50.652030  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:50.652034  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:50.652038  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:50.652041  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:50.652047  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:50.652064  468607 retry.go:31] will retry after 470.580451ms: missing components: kube-dns
	I1124 03:40:51.133462  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:51.133497  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Running
	I1124 03:40:51.133504  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:51.133509  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:51.133514  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:51.133518  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:51.133528  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:51.133533  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:51.133538  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Running
	I1124 03:40:51.133558  468607 system_pods.go:126] duration metric: took 1.493636996s to wait for k8s-apps to be running ...
	I1124 03:40:51.133566  468607 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:40:51.133625  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:40:51.151193  468607 system_svc.go:56] duration metric: took 17.617707ms WaitForService to wait for kubelet
	I1124 03:40:51.151222  468607 kubeadm.go:587] duration metric: took 13.773521156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:40:51.151242  468607 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:40:51.158998  468607 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:40:51.159035  468607 node_conditions.go:123] node cpu capacity is 2
	I1124 03:40:51.159163  468607 node_conditions.go:105] duration metric: took 7.914387ms to run NodePressure ...
	I1124 03:40:51.159180  468607 start.go:242] waiting for startup goroutines ...
	I1124 03:40:51.159201  468607 start.go:247] waiting for cluster config update ...
	I1124 03:40:51.159225  468607 start.go:256] writing updated cluster config ...
	I1124 03:40:51.159566  468607 ssh_runner.go:195] Run: rm -f paused
	I1124 03:40:51.163938  468607 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:51.233364  468607 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dgvvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.238633  468607 pod_ready.go:94] pod "coredns-66bc5c9577-dgvvg" is "Ready"
	I1124 03:40:51.238668  468607 pod_ready.go:86] duration metric: took 5.226756ms for pod "coredns-66bc5c9577-dgvvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.242048  468607 pod_ready.go:83] waiting for pod "etcd-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.247506  468607 pod_ready.go:94] pod "etcd-embed-certs-818836" is "Ready"
	I1124 03:40:51.247534  468607 pod_ready.go:86] duration metric: took 5.457921ms for pod "etcd-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.250505  468607 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.256168  468607 pod_ready.go:94] pod "kube-apiserver-embed-certs-818836" is "Ready"
	I1124 03:40:51.256200  468607 pod_ready.go:86] duration metric: took 5.665265ms for pod "kube-apiserver-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.258827  468607 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.568969  468607 pod_ready.go:94] pod "kube-controller-manager-embed-certs-818836" is "Ready"
	I1124 03:40:51.568996  468607 pod_ready.go:86] duration metric: took 310.144443ms for pod "kube-controller-manager-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.768346  468607 pod_ready.go:83] waiting for pod "kube-proxy-kqtwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.168601  468607 pod_ready.go:94] pod "kube-proxy-kqtwg" is "Ready"
	I1124 03:40:52.168630  468607 pod_ready.go:86] duration metric: took 400.250484ms for pod "kube-proxy-kqtwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.369520  468607 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.768587  468607 pod_ready.go:94] pod "kube-scheduler-embed-certs-818836" is "Ready"
	I1124 03:40:52.768616  468607 pod_ready.go:86] duration metric: took 399.065879ms for pod "kube-scheduler-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.768629  468607 pod_ready.go:40] duration metric: took 1.604655617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:52.832190  468607 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:40:52.835417  468607 out.go:179] * Done! kubectl is now configured to use "embed-certs-818836" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6efc4ca7860c3       1611cd07b61d5       6 seconds ago       Running             busybox                   0                   e3c77ca9f7fed       busybox                                     default
	1c714ace422b1       138784d87c9c5       12 seconds ago      Running             coredns                   0                   bdfbadfad1ed4       coredns-66bc5c9577-mj9gd                    kube-system
	be397b4afce85       66749159455b3       12 seconds ago      Running             storage-provisioner       0                   e34d4f2fbf3ee       storage-provisioner                         kube-system
	cf95865919242       b1a8c6f707935       23 seconds ago      Running             kindnet-cni               0                   d93319665440f       kindnet-tp8zg                               kube-system
	95117708edab7       05baa95f5142d       26 seconds ago      Running             kube-proxy                0                   fb2356fed4bdf       kube-proxy-xg8w4                            kube-system
	02103f0046d80       7eb2c6ff0c5a7       42 seconds ago      Running             kube-controller-manager   0                   2ff2010f77339       kube-controller-manager-no-preload-262280   kube-system
	023306d10623d       b5f57ec6b9867       42 seconds ago      Running             kube-scheduler            0                   ec132d9c3aaed       kube-scheduler-no-preload-262280            kube-system
	0f0cdb21b9f41       a1894772a478e       42 seconds ago      Running             etcd                      0                   2d120cb1cb5d4       etcd-no-preload-262280                      kube-system
	e4efe89b5bce7       43911e833d64d       42 seconds ago      Running             kube-apiserver            0                   b5d0668db9e9a       kube-apiserver-no-preload-262280            kube-system
	
	
	==> containerd <==
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.673916562Z" level=info msg="connecting to shim be397b4afce8525a05276b3a7b1dc032772656b57834670e5afd4dcec6228318" address="unix:///run/containerd/s/5f43b859443edee740bd578455a75a950a1789dc846e6f3612bae93fffa56e11" protocol=ttrpc version=3
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.744969798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mj9gd,Uid:875322e9-dddd-4618-beec-76c737d16e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdfbadfad1ed4b60d5835a593e40a10928671d9e0bc8316e4d9738e714ea8896\""
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.752777131Z" level=info msg="CreateContainer within sandbox \"bdfbadfad1ed4b60d5835a593e40a10928671d9e0bc8316e4d9738e714ea8896\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.774840492Z" level=info msg="StartContainer for \"be397b4afce8525a05276b3a7b1dc032772656b57834670e5afd4dcec6228318\" returns successfully"
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.776759609Z" level=info msg="Container 1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.797295799Z" level=info msg="CreateContainer within sandbox \"bdfbadfad1ed4b60d5835a593e40a10928671d9e0bc8316e4d9738e714ea8896\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4\""
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.798900666Z" level=info msg="StartContainer for \"1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4\""
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.800022108Z" level=info msg="connecting to shim 1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4" address="unix:///run/containerd/s/31aacace18df1bf3670145bc73b7dbb48260829092de200778000bfeacdae2de" protocol=ttrpc version=3
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.909396460Z" level=info msg="StartContainer for \"1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4\" returns successfully"
	Nov 24 03:40:46 no-preload-262280 containerd[760]: time="2025-11-24T03:40:46.264416826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:820858e8-9815-41a7-a6c3-43bbfe947f4b,Namespace:default,Attempt:0,}"
	Nov 24 03:40:46 no-preload-262280 containerd[760]: time="2025-11-24T03:40:46.352645822Z" level=info msg="connecting to shim e3c77ca9f7fedf585c665134c5d43e1daa25554bbc4a8d867de7dee57a3e939f" address="unix:///run/containerd/s/e5560b01ec9b1eea8540256781f39da7201a3b88ccb73d2ed3cc50bdd8ed3a4f" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:40:46 no-preload-262280 containerd[760]: time="2025-11-24T03:40:46.415389667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:820858e8-9815-41a7-a6c3-43bbfe947f4b,Namespace:default,Attempt:0,} returns sandbox id \"e3c77ca9f7fedf585c665134c5d43e1daa25554bbc4a8d867de7dee57a3e939f\""
	Nov 24 03:40:46 no-preload-262280 containerd[760]: time="2025-11-24T03:40:46.419043474Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.519221325Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.520965672Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937190"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.523440978Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.526886192Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.527771859Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.108679392s"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.527812926Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.536142371Z" level=info msg="CreateContainer within sandbox \"e3c77ca9f7fedf585c665134c5d43e1daa25554bbc4a8d867de7dee57a3e939f\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.551658822Z" level=info msg="Container 6efc4ca7860c3df1267db2a0221aebf2421c86a97c855fad6c73908626a7195f: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.563797209Z" level=info msg="CreateContainer within sandbox \"e3c77ca9f7fedf585c665134c5d43e1daa25554bbc4a8d867de7dee57a3e939f\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"6efc4ca7860c3df1267db2a0221aebf2421c86a97c855fad6c73908626a7195f\""
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.565080737Z" level=info msg="StartContainer for \"6efc4ca7860c3df1267db2a0221aebf2421c86a97c855fad6c73908626a7195f\""
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.566666223Z" level=info msg="connecting to shim 6efc4ca7860c3df1267db2a0221aebf2421c86a97c855fad6c73908626a7195f" address="unix:///run/containerd/s/e5560b01ec9b1eea8540256781f39da7201a3b88ccb73d2ed3cc50bdd8ed3a4f" protocol=ttrpc version=3
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.632395433Z" level=info msg="StartContainer for \"6efc4ca7860c3df1267db2a0221aebf2421c86a97c855fad6c73908626a7195f\" returns successfully"
	
	
	==> coredns [1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47901 - 61057 "HINFO IN 2850791332031184546.6905526921411133570. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028132624s
	
	
	==> describe nodes <==
	Name:               no-preload-262280
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-262280
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=no-preload-262280
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_40_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:40:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-262280
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:40:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:40:54 +0000   Mon, 24 Nov 2025 03:40:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:40:54 +0000   Mon, 24 Nov 2025 03:40:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:40:54 +0000   Mon, 24 Nov 2025 03:40:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:40:54 +0000   Mon, 24 Nov 2025 03:40:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-262280
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                79fbca72-e570-478b-819a-4e66cc7dc3e1
	  Boot ID:                    63a8a852-1462-44b1-9d6f-f77d26e8568f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-mj9gd                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-262280                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         33s
	  kube-system                 kindnet-tp8zg                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-262280             250m (12%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-no-preload-262280    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-xg8w4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-262280             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 25s                kube-proxy       
	  Warning  CgroupV1                 44s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node no-preload-262280 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node no-preload-262280 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x7 over 44s)  kubelet          Node no-preload-262280 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  44s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 32s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 32s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node no-preload-262280 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node no-preload-262280 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node no-preload-262280 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node no-preload-262280 event: Registered Node no-preload-262280 in Controller
	  Normal   NodeReady                13s                kubelet          Node no-preload-262280 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:27] overlayfs: idmapped layers are currently not supported
	[Nov24 02:28] overlayfs: idmapped layers are currently not supported
	[Nov24 02:30] overlayfs: idmapped layers are currently not supported
	[  +9.824160] overlayfs: idmapped layers are currently not supported
	[Nov24 02:31] overlayfs: idmapped layers are currently not supported
	[Nov24 02:32] overlayfs: idmapped layers are currently not supported
	[ +27.981383] overlayfs: idmapped layers are currently not supported
	[Nov24 02:33] overlayfs: idmapped layers are currently not supported
	[Nov24 02:34] overlayfs: idmapped layers are currently not supported
	[Nov24 02:35] overlayfs: idmapped layers are currently not supported
	[Nov24 02:36] overlayfs: idmapped layers are currently not supported
	[Nov24 02:37] overlayfs: idmapped layers are currently not supported
	[Nov24 02:38] overlayfs: idmapped layers are currently not supported
	[Nov24 02:39] overlayfs: idmapped layers are currently not supported
	[ +24.837346] overlayfs: idmapped layers are currently not supported
	[Nov24 02:40] overlayfs: idmapped layers are currently not supported
	[ +40.823948] overlayfs: idmapped layers are currently not supported
	[  +1.705989] overlayfs: idmapped layers are currently not supported
	[Nov24 02:42] overlayfs: idmapped layers are currently not supported
	[ +21.661904] overlayfs: idmapped layers are currently not supported
	[Nov24 02:44] overlayfs: idmapped layers are currently not supported
	[  +1.074777] overlayfs: idmapped layers are currently not supported
	[Nov24 02:46] overlayfs: idmapped layers are currently not supported
	[ +19.120392] overlayfs: idmapped layers are currently not supported
	[Nov24 02:48] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [0f0cdb21b9f416ec70e9a682e42f7629ec439caab6d4dd3070e91e4f3347f9a2] <==
	{"level":"warn","ts":"2025-11-24T03:40:15.995514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.078501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.148576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.200007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.229173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.249904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.272081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.302528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.384668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.452176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.487364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.532673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.574626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.624335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.666850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.724683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.762223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.801085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.853747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.908737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.952837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.982649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:17.046709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:17.251173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54270","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T03:40:27.963843Z","caller":"traceutil/trace.go:172","msg":"trace[212483648] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"104.467992ms","start":"2025-11-24T03:40:27.859342Z","end":"2025-11-24T03:40:27.963810Z","steps":["trace[212483648] 'process raft request'  (duration: 104.124482ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:40:55 up  2:23,  0 user,  load average: 5.26, 3.85, 3.06
	Linux no-preload-262280 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cf95865919242f79d08de7186f2a000f985534d1820a8d95476ba6c09013ab0f] <==
	I1124 03:40:31.735103       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:40:31.824956       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:40:31.825150       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:40:31.825174       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:40:31.825186       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:40:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:40:32.034348       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:40:32.034540       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:40:32.034633       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:40:32.036186       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:40:32.236136       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:40:32.236354       1 metrics.go:72] Registering metrics
	I1124 03:40:32.236555       1 controller.go:711] "Syncing nftables rules"
	I1124 03:40:42.040458       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:40:42.040566       1 main.go:301] handling current node
	I1124 03:40:52.033937       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:40:52.033974       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e4efe89b5bce740d11c2842c5d1aa62daf0e473f21d4b8b8d2e641a31d84cf81] <==
	I1124 03:40:18.733216       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 03:40:18.736979       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 03:40:18.741837       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 03:40:18.809158       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:18.809450       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:40:18.853295       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:18.859753       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:40:19.349193       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:40:19.369653       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:40:19.369869       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:40:20.727359       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:40:20.799911       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:40:20.887437       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:40:20.901043       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:40:20.902503       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:40:20.908844       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:40:21.369264       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:40:22.971714       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:40:22.991881       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:40:23.014896       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:40:26.962737       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:40:27.292544       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:40:27.445268       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:27.487615       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 03:40:54.131455       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:39788: use of closed network connection
	
	
	==> kube-controller-manager [02103f0046d800181ffcbc86a11e512c20c091ee5db459b81d7dae1343cef3dc] <==
	I1124 03:40:26.408553       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:40:26.409235       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:40:26.408572       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 03:40:26.411044       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:40:26.419238       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:40:26.419453       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-262280" podCIDRs=["10.244.0.0/24"]
	I1124 03:40:26.419593       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:40:26.429203       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:40:26.429470       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 03:40:26.436511       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:40:26.436532       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:40:26.445905       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:40:26.451518       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:40:26.456968       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:40:26.459061       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 03:40:26.459346       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 03:40:26.459555       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-262280"
	I1124 03:40:26.459714       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 03:40:26.462852       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:40:26.467002       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:40:26.474122       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:40:26.486109       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:40:26.486324       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:40:26.486417       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:40:46.463443       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [95117708edab7db4f15203f63afc7cd4e58237d7c305b8ce721c7cf427b80ce3] <==
	I1124 03:40:28.981504       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:40:29.094351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:40:29.259048       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:40:29.259090       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:40:29.259161       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:40:29.531670       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:40:29.531728       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:40:29.567086       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:40:29.583090       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:40:29.583123       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:40:29.585641       1 config.go:200] "Starting service config controller"
	I1124 03:40:29.585658       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:40:29.596249       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:40:29.596312       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:40:29.596339       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:40:29.596344       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:40:29.620963       1 config.go:309] "Starting node config controller"
	I1124 03:40:29.620982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:40:29.621008       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:40:29.697590       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:40:29.697628       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:40:29.697682       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [023306d10623d671ab0ee9497a7be725838e9df82aac87a9f0e200807f3272dc] <==
	I1124 03:40:18.763708       1 serving.go:386] Generated self-signed cert in-memory
	I1124 03:40:22.062823       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:40:22.062864       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:40:22.077992       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:40:22.079863       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:40:22.079880       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 03:40:22.133672       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 03:40:22.079888       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:40:22.079824       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 03:40:22.140708       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 03:40:22.140915       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:40:22.233951       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 03:40:22.243674       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 03:40:22.243674       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:40:24 no-preload-262280 kubelet[2119]: I1124 03:40:24.185464    2119 apiserver.go:52] "Watching apiserver"
	Nov 24 03:40:24 no-preload-262280 kubelet[2119]: I1124 03:40:24.301633    2119 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 03:40:24 no-preload-262280 kubelet[2119]: I1124 03:40:24.628833    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-262280" podStartSLOduration=1.628812923 podStartE2EDuration="1.628812923s" podCreationTimestamp="2025-11-24 03:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:24.585985084 +0000 UTC m=+1.656531374" watchObservedRunningTime="2025-11-24 03:40:24.628812923 +0000 UTC m=+1.699359082"
	Nov 24 03:40:24 no-preload-262280 kubelet[2119]: I1124 03:40:24.708820    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-262280" podStartSLOduration=1.708803304 podStartE2EDuration="1.708803304s" podCreationTimestamp="2025-11-24 03:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:24.69031654 +0000 UTC m=+1.760862699" watchObservedRunningTime="2025-11-24 03:40:24.708803304 +0000 UTC m=+1.779349471"
	Nov 24 03:40:26 no-preload-262280 kubelet[2119]: I1124 03:40:26.439375    2119 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:40:26 no-preload-262280 kubelet[2119]: I1124 03:40:26.441272    2119 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.632400    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8388de5-8f36-444e-864f-efe3b946972c-xtables-lock\") pod \"kube-proxy-xg8w4\" (UID: \"e8388de5-8f36-444e-864f-efe3b946972c\") " pod="kube-system/kube-proxy-xg8w4"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.632448    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmxgz\" (UniqueName: \"kubernetes.io/projected/8b8b163b-5585-4d91-9717-95f656987530-kube-api-access-gmxgz\") pod \"kindnet-tp8zg\" (UID: \"8b8b163b-5585-4d91-9717-95f656987530\") " pod="kube-system/kindnet-tp8zg"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635779    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8388de5-8f36-444e-864f-efe3b946972c-kube-proxy\") pod \"kube-proxy-xg8w4\" (UID: \"e8388de5-8f36-444e-864f-efe3b946972c\") " pod="kube-system/kube-proxy-xg8w4"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635835    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8388de5-8f36-444e-864f-efe3b946972c-lib-modules\") pod \"kube-proxy-xg8w4\" (UID: \"e8388de5-8f36-444e-864f-efe3b946972c\") " pod="kube-system/kube-proxy-xg8w4"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635863    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtt8n\" (UniqueName: \"kubernetes.io/projected/e8388de5-8f36-444e-864f-efe3b946972c-kube-api-access-wtt8n\") pod \"kube-proxy-xg8w4\" (UID: \"e8388de5-8f36-444e-864f-efe3b946972c\") " pod="kube-system/kube-proxy-xg8w4"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635898    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8b163b-5585-4d91-9717-95f656987530-xtables-lock\") pod \"kindnet-tp8zg\" (UID: \"8b8b163b-5585-4d91-9717-95f656987530\") " pod="kube-system/kindnet-tp8zg"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635924    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8b8b163b-5585-4d91-9717-95f656987530-cni-cfg\") pod \"kindnet-tp8zg\" (UID: \"8b8b163b-5585-4d91-9717-95f656987530\") " pod="kube-system/kindnet-tp8zg"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635949    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8b163b-5585-4d91-9717-95f656987530-lib-modules\") pod \"kindnet-tp8zg\" (UID: \"8b8b163b-5585-4d91-9717-95f656987530\") " pod="kube-system/kindnet-tp8zg"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.886517    2119 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 03:40:31 no-preload-262280 kubelet[2119]: I1124 03:40:31.436921    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xg8w4" podStartSLOduration=4.43690131 podStartE2EDuration="4.43690131s" podCreationTimestamp="2025-11-24 03:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:29.85323751 +0000 UTC m=+6.923783677" watchObservedRunningTime="2025-11-24 03:40:31.43690131 +0000 UTC m=+8.507447477"
	Nov 24 03:40:31 no-preload-262280 kubelet[2119]: I1124 03:40:31.892039    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tp8zg" podStartSLOduration=1.758004291 podStartE2EDuration="4.892009095s" podCreationTimestamp="2025-11-24 03:40:27 +0000 UTC" firstStartedPulling="2025-11-24 03:40:28.285911897 +0000 UTC m=+5.356458055" lastFinishedPulling="2025-11-24 03:40:31.4199167 +0000 UTC m=+8.490462859" observedRunningTime="2025-11-24 03:40:31.891154206 +0000 UTC m=+8.961700389" watchObservedRunningTime="2025-11-24 03:40:31.892009095 +0000 UTC m=+8.962555254"
	Nov 24 03:40:42 no-preload-262280 kubelet[2119]: I1124 03:40:42.110387    2119 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:40:42 no-preload-262280 kubelet[2119]: I1124 03:40:42.249012    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/430685c9-d2cd-4da8-90bb-666070ea7af5-tmp\") pod \"storage-provisioner\" (UID: \"430685c9-d2cd-4da8-90bb-666070ea7af5\") " pod="kube-system/storage-provisioner"
	Nov 24 03:40:42 no-preload-262280 kubelet[2119]: I1124 03:40:42.249082    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjsc\" (UniqueName: \"kubernetes.io/projected/430685c9-d2cd-4da8-90bb-666070ea7af5-kube-api-access-wrjsc\") pod \"storage-provisioner\" (UID: \"430685c9-d2cd-4da8-90bb-666070ea7af5\") " pod="kube-system/storage-provisioner"
	Nov 24 03:40:42 no-preload-262280 kubelet[2119]: I1124 03:40:42.349503    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6vg\" (UniqueName: \"kubernetes.io/projected/875322e9-dddd-4618-beec-76c737d16e3c-kube-api-access-2m6vg\") pod \"coredns-66bc5c9577-mj9gd\" (UID: \"875322e9-dddd-4618-beec-76c737d16e3c\") " pod="kube-system/coredns-66bc5c9577-mj9gd"
	Nov 24 03:40:42 no-preload-262280 kubelet[2119]: I1124 03:40:42.349578    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/875322e9-dddd-4618-beec-76c737d16e3c-config-volume\") pod \"coredns-66bc5c9577-mj9gd\" (UID: \"875322e9-dddd-4618-beec-76c737d16e3c\") " pod="kube-system/coredns-66bc5c9577-mj9gd"
	Nov 24 03:40:43 no-preload-262280 kubelet[2119]: I1124 03:40:43.888540    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.888519456000001 podStartE2EDuration="13.888519456s" podCreationTimestamp="2025-11-24 03:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:42.889027107 +0000 UTC m=+19.959573274" watchObservedRunningTime="2025-11-24 03:40:43.888519456 +0000 UTC m=+20.959065623"
	Nov 24 03:40:43 no-preload-262280 kubelet[2119]: I1124 03:40:43.907263    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mj9gd" podStartSLOduration=16.907233513 podStartE2EDuration="16.907233513s" podCreationTimestamp="2025-11-24 03:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:43.889176625 +0000 UTC m=+20.959722784" watchObservedRunningTime="2025-11-24 03:40:43.907233513 +0000 UTC m=+20.977779672"
	Nov 24 03:40:46 no-preload-262280 kubelet[2119]: I1124 03:40:46.073905    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f6bk\" (UniqueName: \"kubernetes.io/projected/820858e8-9815-41a7-a6c3-43bbfe947f4b-kube-api-access-2f6bk\") pod \"busybox\" (UID: \"820858e8-9815-41a7-a6c3-43bbfe947f4b\") " pod="default/busybox"
	
	
	==> storage-provisioner [be397b4afce8525a05276b3a7b1dc032772656b57834670e5afd4dcec6228318] <==
	I1124 03:40:42.767588       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:40:42.811362       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:40:42.811414       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:40:42.817106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:42.827456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:40:42.828066       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:40:42.828389       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23985941-51d1-473d-8dad-195f98b18f60", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-262280_b83dfcf7-d29d-4e4e-b161-2eb9414fe41e became leader
	I1124 03:40:42.828452       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-262280_b83dfcf7-d29d-4e4e-b161-2eb9414fe41e!
	W1124 03:40:42.842119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:42.848699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:40:42.943618       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-262280_b83dfcf7-d29d-4e4e-b161-2eb9414fe41e!
	W1124 03:40:44.852117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:44.859380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:46.862646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:46.867501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:48.871150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:48.876088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:50.880041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:50.885681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:52.889217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:52.899170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:54.903238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:54.908589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-262280 -n no-preload-262280
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-262280 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-262280
helpers_test.go:243: (dbg) docker inspect no-preload-262280:

-- stdout --
	[
	    {
	        "Id": "35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43",
	        "Created": "2025-11-24T03:39:39.125759588Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 465758,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:39:39.222912836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43/hostname",
	        "HostsPath": "/var/lib/docker/containers/35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43/hosts",
	        "LogPath": "/var/lib/docker/containers/35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43/35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43-json.log",
	        "Name": "/no-preload-262280",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-262280:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-262280",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "35fb5533c8b0b5cb3a1f39f488c8a3808dfdde73bd56ee85ffeb7ede0a29bb43",
	                "LowerDir": "/var/lib/docker/overlay2/1a690ac398d6ea4279990c525ce2b1ce9b0be841ce796f32faa57c71d3bcc7c7-init/diff:/var/lib/docker/overlay2/11b197f530f0d571f61892814d8d4c774f7d3e5a97abdd8c5aa182cc99b2d856/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1a690ac398d6ea4279990c525ce2b1ce9b0be841ce796f32faa57c71d3bcc7c7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1a690ac398d6ea4279990c525ce2b1ce9b0be841ce796f32faa57c71d3bcc7c7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1a690ac398d6ea4279990c525ce2b1ce9b0be841ce796f32faa57c71d3bcc7c7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-262280",
	                "Source": "/var/lib/docker/volumes/no-preload-262280/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-262280",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-262280",
	                "name.minikube.sigs.k8s.io": "no-preload-262280",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "ff86b97e9af3f76433d01ded73b3a20157bafebde6fceb4cf4f1ef2d072b94c8",
	            "SandboxKey": "/var/run/docker/netns/ff86b97e9af3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-262280": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:f1:9d:47:19:ca",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c7a8069005652453f62a34c2e34a46a4f9e1a107e7ecc865b5e42d1b2ca7588f",
	                    "EndpointID": "2c04a89f2f320f71228617091bb8d81d0ca59d5a2ae2905b6fa3b657d1ab9b55",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-262280",
	                        "35fb5533c8b0"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-262280 -n no-preload-262280
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-262280 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-262280 logs -n 25: (1.264666656s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-env-574539 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │                     │
	│ start   │ -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:35 UTC │
	│ delete  │ -p kubernetes-upgrade-850960                                                                                                                                                                                                                        │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-expiration-846384 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ force-systemd-env-574539 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p force-systemd-env-574539                                                                                                                                                                                                                         │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-options-216763 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ cert-options-216763 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ -p cert-options-216763 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p cert-options-216763                                                                                                                                                                                                                              │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-098965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ stop    │ -p old-k8s-version-098965 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-098965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ start   │ -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ image   │ old-k8s-version-098965 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ pause   │ -p old-k8s-version-098965 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ unpause │ -p old-k8s-version-098965 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p old-k8s-version-098965                                                                                                                                                                                                                           │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p old-k8s-version-098965                                                                                                                                                                                                                           │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p no-preload-262280 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-262280         │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ start   │ -p cert-expiration-846384 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p cert-expiration-846384                                                                                                                                                                                                                           │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p embed-certs-818836 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-818836        │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:39:54
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:39:54.770134  468607 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:39:54.770765  468607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:54.770803  468607 out.go:374] Setting ErrFile to fd 2...
	I1124 03:39:54.770823  468607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:54.771173  468607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:39:54.771694  468607 out.go:368] Setting JSON to false
	I1124 03:39:54.772710  468607 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8523,"bootTime":1763947072,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:39:54.772814  468607 start.go:143] virtualization:  
	I1124 03:39:54.776844  468607 out.go:179] * [embed-certs-818836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:39:54.781644  468607 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:39:54.781732  468607 notify.go:221] Checking for updates...
	I1124 03:39:54.787053  468607 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:39:54.790493  468607 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:39:54.793844  468607 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:39:54.797082  468607 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:39:54.800233  468607 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:39:54.803908  468607 config.go:182] Loaded profile config "no-preload-262280": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:39:54.804064  468607 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:39:54.846350  468607 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:39:54.846478  468607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:39:54.943233  468607 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-24 03:39:54.932926558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:39:54.943335  468607 docker.go:319] overlay module found
	I1124 03:39:54.946509  468607 out.go:179] * Using the docker driver based on user configuration
	I1124 03:39:54.950114  468607 start.go:309] selected driver: docker
	I1124 03:39:54.950133  468607 start.go:927] validating driver "docker" against <nil>
	I1124 03:39:54.950147  468607 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:39:54.950879  468607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:39:55.051907  468607 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-24 03:39:55.038363177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:39:55.052067  468607 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:39:55.052307  468607 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:39:55.055713  468607 out.go:179] * Using Docker driver with root privileges
	I1124 03:39:55.058665  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:39:55.058771  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:39:55.058786  468607 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:39:55.058875  468607 start.go:353] cluster config:
	{Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:39:55.062215  468607 out.go:179] * Starting "embed-certs-818836" primary control-plane node in "embed-certs-818836" cluster
	I1124 03:39:55.065106  468607 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:39:55.068109  468607 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:39:55.071078  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:39:55.071139  468607 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 03:39:55.071152  468607 cache.go:65] Caching tarball of preloaded images
	I1124 03:39:55.071260  468607 preload.go:238] Found /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 03:39:55.071275  468607 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 03:39:55.071398  468607 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json ...
	I1124 03:39:55.071424  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json: {Name:mk937c632daa818953aa058a3473ebcd37b1b74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:39:55.071593  468607 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:39:55.094186  468607 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:39:55.094210  468607 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:39:55.094227  468607 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:39:55.094258  468607 start.go:360] acquireMachinesLock for embed-certs-818836: {Name:mk5ce88de168b198a494858bb8201276136df5bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:39:55.094377  468607 start.go:364] duration metric: took 97.543µs to acquireMachinesLock for "embed-certs-818836"
	I1124 03:39:55.094417  468607 start.go:93] Provisioning new machine with config: &{Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:39:55.094497  468607 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:39:53.821541  465459 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.603191329s)
	I1124 03:39:53.821565  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 03:39:53.821584  465459 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:53.821636  465459 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:57.814796  465459 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.993137445s)
	I1124 03:39:57.814820  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:39:57.814838  465459 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:39:57.814894  465459 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:39:55.099888  468607 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:39:55.100165  468607 start.go:159] libmachine.API.Create for "embed-certs-818836" (driver="docker")
	I1124 03:39:55.100219  468607 client.go:173] LocalClient.Create starting
	I1124 03:39:55.100327  468607 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem
	I1124 03:39:55.100376  468607 main.go:143] libmachine: Decoding PEM data...
	I1124 03:39:55.100396  468607 main.go:143] libmachine: Parsing certificate...
	I1124 03:39:55.100448  468607 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem
	I1124 03:39:55.100500  468607 main.go:143] libmachine: Decoding PEM data...
	I1124 03:39:55.100517  468607 main.go:143] libmachine: Parsing certificate...
	I1124 03:39:55.100910  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:39:55.125795  468607 cli_runner.go:211] docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:39:55.125884  468607 network_create.go:284] running [docker network inspect embed-certs-818836] to gather additional debugging logs...
	I1124 03:39:55.125914  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836
	W1124 03:39:55.143227  468607 cli_runner.go:211] docker network inspect embed-certs-818836 returned with exit code 1
	I1124 03:39:55.143261  468607 network_create.go:287] error running [docker network inspect embed-certs-818836]: docker network inspect embed-certs-818836: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-818836 not found
	I1124 03:39:55.143275  468607 network_create.go:289] output of [docker network inspect embed-certs-818836]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-818836 not found
	
	** /stderr **
	I1124 03:39:55.143372  468607 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:39:55.161548  468607 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-752aaa40bb3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:00:20:e4:71:15} reservation:<nil>}
	I1124 03:39:55.161924  468607 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbb0dee281db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:ff:07:3e:91:0f} reservation:<nil>}
	I1124 03:39:55.162178  468607 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d95ffec60547 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:b5:f2:ed:07:1e} reservation:<nil>}
	I1124 03:39:55.162624  468607 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2c210}
	I1124 03:39:55.162647  468607 network_create.go:124] attempt to create docker network embed-certs-818836 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 03:39:55.162703  468607 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-818836 embed-certs-818836
	I1124 03:39:55.225512  468607 network_create.go:108] docker network embed-certs-818836 192.168.76.0/24 created
	I1124 03:39:55.225548  468607 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-818836" container
	I1124 03:39:55.225630  468607 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:39:55.242034  468607 cli_runner.go:164] Run: docker volume create embed-certs-818836 --label name.minikube.sigs.k8s.io=embed-certs-818836 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:39:55.262160  468607 oci.go:103] Successfully created a docker volume embed-certs-818836
	I1124 03:39:55.262245  468607 cli_runner.go:164] Run: docker run --rm --name embed-certs-818836-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-818836 --entrypoint /usr/bin/test -v embed-certs-818836:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:39:56.023650  468607 oci.go:107] Successfully prepared a docker volume embed-certs-818836
	I1124 03:39:56.023728  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:39:56.023743  468607 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:39:56.023811  468607 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-818836:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:39:58.487593  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:39:58.487627  465459 cache_images.go:125] Successfully loaded all cached images
	I1124 03:39:58.487632  465459 cache_images.go:94] duration metric: took 15.116520084s to LoadCachedImages
	I1124 03:39:58.487645  465459 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1124 03:39:58.487737  465459 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-262280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-262280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:39:58.487802  465459 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:39:58.517432  465459 cni.go:84] Creating CNI manager for ""
	I1124 03:39:58.517454  465459 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:39:58.517467  465459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:39:58.517491  465459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-262280 NodeName:no-preload-262280 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:39:58.517604  465459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-262280"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:39:58.517675  465459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:39:58.527708  465459 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 03:39:58.527826  465459 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 03:39:58.537240  465459 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1124 03:39:58.537336  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 03:39:58.538133  465459 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1124 03:39:58.538622  465459 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1124 03:39:58.544156  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 03:39:58.544188  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1124 03:39:59.579840  465459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:39:59.602240  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 03:39:59.612666  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 03:39:59.612754  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1124 03:39:59.686847  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 03:39:59.706955  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 03:39:59.707011  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1124 03:40:00.747521  465459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:00.765344  465459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1124 03:40:00.782659  465459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:00.799074  465459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1124 03:40:00.815268  465459 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:00.821044  465459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:00.834962  465459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:00.961773  465459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:00.983622  465459 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280 for IP: 192.168.85.2
	I1124 03:40:00.983698  465459 certs.go:195] generating shared ca certs ...
	I1124 03:40:00.983731  465459 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:00.983948  465459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:40:00.984027  465459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:40:00.984066  465459 certs.go:257] generating profile certs ...
	I1124 03:40:00.984149  465459 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key
	I1124 03:40:00.984190  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt with IP's: []
	I1124 03:40:01.602129  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt ...
	I1124 03:40:01.602164  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: {Name:mk5c809e6dd128dc33970522909ae40ed13851c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:01.602404  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key ...
	I1124 03:40:01.602420  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key: {Name:mk4c99883f96920c3d389a999045dde9f43e74fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:01.602523  465459 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859
	I1124 03:40:01.602540  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:40:02.066816  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 ...
	I1124 03:40:02.066899  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859: {Name:mkd9f7b00f0b8be089cbce37f7826610732080e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.067142  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859 ...
	I1124 03:40:02.067186  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859: {Name:mkaaed6b4175e7a41645d8c3454f2c44a0203858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.067372  465459 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt
	I1124 03:40:02.067467  465459 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key
	I1124 03:40:02.067543  465459 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key
	I1124 03:40:02.067564  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt with IP's: []
	I1124 03:40:02.465004  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt ...
	I1124 03:40:02.465036  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt: {Name:mkf027bf4f367183ad961bb9001139254f6258cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.465206  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key ...
	I1124 03:40:02.465221  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key: {Name:mk8915392d44290b2ab552251edca0730df8ed0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.465611  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:40:02.465663  465459 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:02.465681  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:40:02.465712  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:02.465746  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:02.465775  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:02.465824  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:02.466427  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:02.490422  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:40:02.538618  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:02.580031  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:02.623593  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:40:02.657524  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:40:02.687220  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:02.710371  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:02.732274  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:40:02.755007  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:40:02.777653  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:02.805037  465459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:02.826328  465459 ssh_runner.go:195] Run: openssl version
	I1124 03:40:02.842808  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:40:02.861247  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.869101  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.869168  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.973780  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
	I1124 03:40:02.983869  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:40:03.003344  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.014606  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.014678  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.100872  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:03.119219  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:03.132707  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.143890  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.143956  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.227580  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:03.241329  465459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:03.250558  465459 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:40:03.250662  465459 kubeadm.go:401] StartCluster: {Name:no-preload-262280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-262280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:03.250758  465459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:03.250841  465459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:03.389740  465459 cri.go:89] found id: ""
	I1124 03:40:03.389818  465459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:40:03.413175  465459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:40:03.434949  465459 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:40:03.435019  465459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:40:03.450572  465459 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:40:03.450591  465459 kubeadm.go:158] found existing configuration files:
	
	I1124 03:40:03.450643  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:40:03.481203  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:40:03.481293  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:40:03.505063  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:40:03.526828  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:40:03.526899  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:40:03.542273  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:40:03.554380  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:40:03.554459  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:40:03.565133  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:40:03.583655  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:40:03.583761  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
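Note on the grep/rm pairs above: this is minikube's stale-config cleanup, which keeps each kubeconfig under /etc/kubernetes only if it already references https://control-plane.minikube.internal:8443 and removes it otherwise, so the subsequent kubeadm init regenerates it. A minimal Go sketch of that pattern (illustrative only, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
)

// Keep each kubeadm-managed kubeconfig only if it already points at the
// expected control-plane endpoint; otherwise remove it so `kubeadm init`
// can regenerate it. Mirrors the grep/rm pairs in the log above.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint (or the file itself) is missing.
		if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
			if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
				fmt.Println("remove failed:", err)
			}
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443")
}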
	I1124 03:40:03.600101  465459 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:40:03.695740  465459 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:40:03.695802  465459 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:40:03.729178  465459 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:40:03.729476  465459 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:40:03.729518  465459 kubeadm.go:319] OS: Linux
	I1124 03:40:03.729563  465459 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:40:03.729611  465459 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:40:03.729658  465459 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:40:03.729710  465459 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:40:03.729759  465459 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:40:03.729806  465459 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:40:03.729851  465459 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:40:03.729911  465459 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:40:03.729958  465459 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:40:03.847775  465459 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:40:03.847886  465459 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:40:03.847977  465459 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:40:03.860909  465459 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
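From this point the report interleaves two concurrent minikube start runs: PID 465459 is bringing up no-preload-262280 and PID 468607 is bringing up embed-certs-818836. When reading interleaved output like this it helps to filter by the PID column; a small Go helper (hypothetical, not part of the test suite) could do it:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// Print only the log lines whose third whitespace-separated field (the PID,
// e.g. "468607") matches the first argument.
// Usage: go run filterpid.go 468607 < test.log
func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: filterpid <pid>")
		os.Exit(1)
	}
	pid := os.Args[1]
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // some log lines are very long
	for sc.Scan() {
		if f := strings.Fields(sc.Text()); len(f) >= 3 && f[2] == pid {
			fmt.Println(sc.Text())
		}
	}
}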
	I1124 03:40:02.325904  468607 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-818836:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (6.302044362s)
	I1124 03:40:02.325939  468607 kic.go:203] duration metric: took 6.302193098s to extract preloaded images to volume ...
	W1124 03:40:02.326078  468607 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 03:40:02.326190  468607 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:40:02.445610  468607 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-818836 --name embed-certs-818836 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-818836 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-818836 --network embed-certs-818836 --ip 192.168.76.2 --volume embed-certs-818836:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:40:02.830161  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Running}}
	I1124 03:40:02.858743  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:02.883367  468607 cli_runner.go:164] Run: docker exec embed-certs-818836 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:40:02.940884  468607 oci.go:144] the created container "embed-certs-818836" has a running status.
	I1124 03:40:02.940913  468607 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa...
	I1124 03:40:03.398411  468607 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:40:03.429853  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:03.464067  468607 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:40:03.464088  468607 kic_runner.go:114] Args: [docker exec --privileged embed-certs-818836 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:40:03.540196  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:03.576062  468607 machine.go:94] provisionDockerMachine start ...
	I1124 03:40:03.576168  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:03.596498  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.597706  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:03.597742  468607 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:40:03.598783  468607 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
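The "ssh: handshake failed: EOF" above is transient: sshd inside the just-created container is not accepting sessions yet, and the provisioner keeps retrying until the hostname command succeeds at 03:40:06 below. A rough sketch of such a wait loop (TCP-level only, an illustration rather than libmachine's actual retry logic):

package main

import (
	"fmt"
	"net"
	"time"
)

// Poll the forwarded SSH port until it accepts a TCP connection or the
// timeout expires. Real SSH readiness also needs the handshake to succeed;
// this sketch stops at the TCP level for brevity.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
}

func main() {
	if err := waitForSSH("127.0.0.1:33433", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh port is accepting connections")
}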
	I1124 03:40:03.865701  465459 out.go:252]   - Generating certificates and keys ...
	I1124 03:40:03.865794  465459 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:40:03.865861  465459 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:40:04.261018  465459 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:40:04.423750  465459 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:40:04.784877  465459 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:40:05.469508  465459 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:40:05.670184  465459 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:40:05.670529  465459 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-262280] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:40:05.916276  465459 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:40:05.916671  465459 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-262280] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:40:06.295195  465459 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:40:06.703517  465459 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:40:07.221344  465459 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:40:07.221867  465459 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:40:06.756947  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-818836
	
	I1124 03:40:06.757024  468607 ubuntu.go:182] provisioning hostname "embed-certs-818836"
	I1124 03:40:06.757117  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:06.780855  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:06.781159  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:06.781170  468607 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-818836 && echo "embed-certs-818836" | sudo tee /etc/hostname
	I1124 03:40:06.952924  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-818836
	
	I1124 03:40:06.953068  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:06.976988  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:06.977313  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:06.977329  468607 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-818836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-818836/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-818836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:40:07.145464  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: 
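The shell block above makes the node's own hostname resolvable: it leaves /etc/hosts alone when the name is already present, otherwise it rewrites an existing 127.0.1.1 entry or appends one. The same idempotent edit, sketched in Go (illustrative only):

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// ensureHostname returns the hosts file contents with the given hostname
// mapped to 127.0.1.1, mirroring the shell block in the log above.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
		return hosts // already resolvable, nothing to do
	}
	re := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if re.MatchString(hosts) {
		return re.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	if !strings.HasSuffix(hosts, "\n") {
		hosts += "\n"
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost\n", "embed-certs-818836"))
}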
	I1124 03:40:07.145556  468607 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-255205/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-255205/.minikube}
	I1124 03:40:07.145614  468607 ubuntu.go:190] setting up certificates
	I1124 03:40:07.145642  468607 provision.go:84] configureAuth start
	I1124 03:40:07.145739  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.169212  468607 provision.go:143] copyHostCerts
	I1124 03:40:07.169290  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem, removing ...
	I1124 03:40:07.169299  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem
	I1124 03:40:07.169376  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem (1078 bytes)
	I1124 03:40:07.169475  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem, removing ...
	I1124 03:40:07.169480  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem
	I1124 03:40:07.169506  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem (1123 bytes)
	I1124 03:40:07.169572  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem, removing ...
	I1124 03:40:07.169578  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem
	I1124 03:40:07.169604  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem (1675 bytes)
	I1124 03:40:07.169661  468607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem org=jenkins.embed-certs-818836 san=[127.0.0.1 192.168.76.2 embed-certs-818836 localhost minikube]
	I1124 03:40:07.418050  468607 provision.go:177] copyRemoteCerts
	I1124 03:40:07.418164  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:40:07.418250  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.436857  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.541668  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:40:07.562105  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 03:40:07.582528  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:40:07.603626  468607 provision.go:87] duration metric: took 457.949417ms to configureAuth
	I1124 03:40:07.603697  468607 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:40:07.603915  468607 config.go:182] Loaded profile config "embed-certs-818836": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:07.603945  468607 machine.go:97] duration metric: took 4.027864554s to provisionDockerMachine
	I1124 03:40:07.603968  468607 client.go:176] duration metric: took 12.503739627s to LocalClient.Create
	I1124 03:40:07.603998  468607 start.go:167] duration metric: took 12.503833413s to libmachine.API.Create "embed-certs-818836"
	I1124 03:40:07.604072  468607 start.go:293] postStartSetup for "embed-certs-818836" (driver="docker")
	I1124 03:40:07.604107  468607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:40:07.604203  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:40:07.604265  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.632600  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.737983  468607 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:40:07.742314  468607 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:40:07.742341  468607 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:40:07.742353  468607 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/addons for local assets ...
	I1124 03:40:07.742407  468607 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/files for local assets ...
	I1124 03:40:07.742485  468607 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem -> 2570692.pem in /etc/ssl/certs
	I1124 03:40:07.742591  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:40:07.751254  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:07.775588  468607 start.go:296] duration metric: took 171.476748ms for postStartSetup
	I1124 03:40:07.776070  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.810247  468607 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json ...
	I1124 03:40:07.810536  468607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:40:07.810584  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.829698  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.934319  468607 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:40:07.940379  468607 start.go:128] duration metric: took 12.845864213s to createHost
	I1124 03:40:07.940407  468607 start.go:83] releasing machines lock for "embed-certs-818836", held for 12.84601335s
	I1124 03:40:07.940518  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.966549  468607 ssh_runner.go:195] Run: cat /version.json
	I1124 03:40:07.966614  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.966858  468607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:40:07.966916  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:08.009694  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:08.010496  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:08.140825  468607 ssh_runner.go:195] Run: systemctl --version
	I1124 03:40:08.236306  468607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:40:08.241952  468607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:40:08.242033  468607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:40:08.275925  468607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 03:40:08.276006  468607 start.go:496] detecting cgroup driver to use...
	I1124 03:40:08.276054  468607 detect.go:187] detected "cgroupfs" cgroup driver on host os
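minikube detected the "cgroupfs" cgroup driver on this host and configures both containerd and the kubelet to match (see cgroupDriver: cgroupfs in the kubelet config rendered further down). A related check commonly used for this kind of detection, shown purely as an illustration and not necessarily what detect.go does, is telling the unified cgroup v2 hierarchy apart from v1:

package main

import (
	"fmt"
	"os"
)

// The unified (v2) hierarchy exposes /sys/fs/cgroup/cgroup.controllers;
// its absence usually means a v1 (cgroupfs-style) layout.
func cgroupVersion() int {
	if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
		return 2
	}
	return 1
}

func main() {
	fmt.Printf("detected cgroup v%d\n", cgroupVersion())
}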
	I1124 03:40:08.276163  468607 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:40:08.293354  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:40:08.309121  468607 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:40:08.309273  468607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:40:08.329161  468607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:40:08.349309  468607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:40:08.512169  468607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:40:08.692876  468607 docker.go:234] disabling docker service ...
	I1124 03:40:08.692943  468607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:40:08.722865  468607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:40:08.738391  468607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:40:08.914395  468607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:40:09.078224  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:40:09.099626  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:40:09.127201  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:40:09.137475  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:40:09.151390  468607 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 03:40:09.151466  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 03:40:09.161530  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:40:09.179218  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:40:09.188732  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:40:09.198154  468607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:40:09.206565  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:40:09.215833  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:40:09.225156  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:40:09.234765  468607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:40:09.243300  468607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:40:09.251671  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:09.434190  468607 ssh_runner.go:195] Run: sudo systemctl restart containerd
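The sed commands above adjust /etc/containerd/config.toml before the restart: the sandbox image is pinned to registry.k8s.io/pause:3.10.1, enable_unprivileged_ports is set to true, and SystemdCgroup is forced to false so the runc shim uses the cgroupfs driver. The SystemdCgroup rewrite expressed over the file contents in Go (equivalent in effect to the logged sed, not minikube's implementation):

package main

import (
	"fmt"
	"regexp"
)

// Rewrite any "SystemdCgroup = ..." line to "SystemdCgroup = false",
// preserving its indentation - the same effect as the sed command in the log.
func disableSystemdCgroup(config string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(config, "${1}SystemdCgroup = false")
}

func main() {
	fmt.Print(disableSystemdCgroup("    SystemdCgroup = true\n"))
}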
	I1124 03:40:09.629101  468607 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:40:09.629177  468607 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:40:09.633574  468607 start.go:564] Will wait 60s for crictl version
	I1124 03:40:09.633686  468607 ssh_runner.go:195] Run: which crictl
	I1124 03:40:09.637799  468607 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:40:09.680020  468607 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:40:09.680112  468607 ssh_runner.go:195] Run: containerd --version
	I1124 03:40:09.701052  468607 ssh_runner.go:195] Run: containerd --version
	I1124 03:40:09.728551  468607 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 03:40:09.731602  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:40:09.752927  468607 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:40:09.757138  468607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:09.767237  468607 kubeadm.go:884] updating cluster {Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:40:09.767356  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:40:09.767434  468607 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:07.945073  465459 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:40:08.356082  465459 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:40:08.704960  465459 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:40:09.943963  465459 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:40:10.216943  465459 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:40:10.218580  465459 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:40:10.237543  465459 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:40:09.801793  468607 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:40:09.801818  468607 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:40:09.801887  468607 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:09.828434  468607 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:40:09.828460  468607 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:40:09.828491  468607 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 03:40:09.828596  468607 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-818836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:40:09.828666  468607 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:40:09.855719  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:40:09.855746  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:09.855754  468607 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:40:09.855777  468607 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-818836 NodeName:embed-certs-818836 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:40:09.855896  468607 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-818836"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:40:09.855970  468607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:40:09.864082  468607 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:40:09.864155  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:09.871799  468607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 03:40:09.885236  468607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:09.903151  468607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
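The 2231 bytes written to /var/tmp/minikube/kubeadm.yaml.new are the config dumped in full above; between the profiles in this run only a handful of values differ (node name, advertise address, cert SANs). Purely as an illustration of how such a file is produced, a hypothetical trimmed-down template for the InitConfiguration part (not minikube's actual template) could be rendered like this:

package main

import (
	"os"
	"text/template"
)

// Hypothetical, trimmed-down template covering only the per-node fields of
// the kubeadm InitConfiguration shown in the log above.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///run/containerd/containerd.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	_ = t.Execute(os.Stdout, struct{ NodeName, AdvertiseAddress string }{
		NodeName:         "embed-certs-818836",
		AdvertiseAddress: "192.168.76.2",
	})
}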
	I1124 03:40:09.916330  468607 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:09.920755  468607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:09.930245  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:10.095373  468607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:10.120719  468607 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836 for IP: 192.168.76.2
	I1124 03:40:10.120751  468607 certs.go:195] generating shared ca certs ...
	I1124 03:40:10.120775  468607 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.120926  468607 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:40:10.121022  468607 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:40:10.121036  468607 certs.go:257] generating profile certs ...
	I1124 03:40:10.121101  468607 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key
	I1124 03:40:10.121117  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt with IP's: []
	I1124 03:40:10.420574  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt ...
	I1124 03:40:10.420618  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt: {Name:mk242703eac12cbe34e4028bdd5925f7440b86e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.420945  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key ...
	I1124 03:40:10.420962  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key: {Name:mk4f7dbe6cf87f427019f2b9bb878908f82573e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.421164  468607 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253
	I1124 03:40:10.421185  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:40:10.579421  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 ...
	I1124 03:40:10.579459  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253: {Name:mk072dbea8dc92562bf332b98a65b57fa9581398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.579707  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253 ...
	I1124 03:40:10.579733  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253: {Name:mk3986530288979c5c9a2178817e35e45248f3c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.579920  468607 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt
	I1124 03:40:10.580110  468607 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key
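The apiserver certificate generated above is signed for 10.96.0.1 alongside the loopback and node addresses: that is the first address of the ServiceCIDR 10.96.0.0/12, i.e. the ClusterIP of the in-cluster "kubernetes" Service, so clients inside the cluster can validate the apiserver's certificate. Deriving that address from the CIDR, as a small sketch:

package main

import (
	"fmt"
	"net"
)

// firstServiceIP returns network address + 1 for an IPv4 CIDR,
// e.g. 10.96.0.0/12 -> 10.96.0.1 (the default "kubernetes" Service ClusterIP).
func firstServiceIP(cidr string) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("only IPv4 is handled in this sketch")
	}
	first := make(net.IP, len(ip))
	copy(first, ip)
	first[3]++ // assumes the network address does not end in .255
	return first, nil
}

func main() {
	ip, err := firstServiceIP("10.96.0.0/12")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(ip) // 10.96.0.1
}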
	I1124 03:40:10.580235  468607 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key
	I1124 03:40:10.580282  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt with IP's: []
	I1124 03:40:10.650382  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt ...
	I1124 03:40:10.650422  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt: {Name:mk7002a63ade6dd6830536f0b45108488d8d2647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.650709  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key ...
	I1124 03:40:10.650730  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key: {Name:mk9ed88761ece5843396144a4fbfafba4af7e713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.651036  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:40:10.651117  468607 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:10.651134  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:40:10.651185  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:10.651246  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:10.651301  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:10.651375  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:10.652050  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:10.674232  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:40:10.698101  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:10.717381  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:10.737149  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:40:10.761648  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:40:10.786481  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:10.807220  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:10.827613  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:10.849625  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:40:10.870797  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:40:10.892331  468607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:10.908461  468607 ssh_runner.go:195] Run: openssl version
	I1124 03:40:10.916101  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:40:10.926608  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.931358  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.931455  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.976219  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:10.986375  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:10.996391  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.017389  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.017511  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.093548  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:11.109631  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:40:11.122383  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.127328  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.127425  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.171896  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
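The 8-hex-digit names linked above (3ec20f2e.0, b5213941.0, 51391683.0) are OpenSSL subject-name hashes: OpenSSL-based clients look up CA certificates in /etc/ssl/certs by the output of "openssl x509 -hash", so each PEM gets a hash-named symlink. Recreating one such link, sketched in Go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash symlinks a PEM certificate into certsDir under its OpenSSL
// subject-name hash (suffix ".0"), the convention the log's ln -fs follows.
func linkByHash(pem, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // mimic `ln -fs`: replace any existing link
	return os.Symlink(pem, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}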
	I1124 03:40:11.181990  468607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:11.186817  468607 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:40:11.186902  468607 kubeadm.go:401] StartCluster: {Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:11.187015  468607 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:11.187107  468607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:11.229657  468607 cri.go:89] found id: ""
	I1124 03:40:11.229767  468607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:40:11.239862  468607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:40:11.249588  468607 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:40:11.249708  468607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:40:11.261397  468607 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:40:11.261464  468607 kubeadm.go:158] found existing configuration files:
	
	I1124 03:40:11.261537  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:40:11.271489  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:40:11.271603  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:40:11.282245  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:40:11.295430  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:40:11.295544  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:40:11.303936  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:40:11.314965  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:40:11.315086  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:40:11.322532  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:40:11.331297  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:40:11.331410  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:40:11.339587  468607 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:40:11.388094  468607 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:40:11.388694  468607 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:40:11.418975  468607 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:40:11.419097  468607 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:40:11.419162  468607 kubeadm.go:319] OS: Linux
	I1124 03:40:11.419229  468607 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:40:11.419310  468607 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:40:11.419397  468607 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:40:11.419482  468607 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:40:11.419545  468607 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:40:11.419609  468607 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:40:11.419672  468607 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:40:11.419733  468607 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:40:11.419793  468607 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:40:11.498745  468607 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:40:11.498892  468607 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:40:11.499019  468607 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:40:11.505807  468607 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:40:10.241345  465459 out.go:252]   - Booting up control plane ...
	I1124 03:40:10.241455  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:40:10.245314  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:40:10.248607  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:40:10.281242  465459 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:40:10.281374  465459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:40:10.290260  465459 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:40:10.290359  465459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:40:10.290400  465459 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:40:10.449824  465459 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:40:10.450005  465459 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:40:11.952880  465459 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500833117s
	I1124 03:40:11.954116  465459 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:40:11.954483  465459 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:40:11.954823  465459 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:40:11.955791  465459 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:40:11.512278  468607 out.go:252]   - Generating certificates and keys ...
	I1124 03:40:11.512384  468607 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:40:11.512475  468607 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:40:12.156551  468607 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:40:12.440381  468607 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:40:13.054828  468607 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:40:14.412107  468607 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:40:17.439040  465459 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.482829056s
	I1124 03:40:14.824196  468607 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:40:14.824831  468607 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-818836 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:40:15.040863  468607 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:40:15.040998  468607 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-818836 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:40:15.376085  468607 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:40:15.719552  468607 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:40:16.788559  468607 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:40:16.789083  468607 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:40:17.179360  468607 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:40:17.589911  468607 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:40:18.716938  468607 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:40:19.434256  468607 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:40:19.598171  468607 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:40:19.599352  468607 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:40:19.612523  468607 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:40:19.615809  468607 out.go:252]   - Booting up control plane ...
	I1124 03:40:19.615923  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:40:19.616002  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:40:19.616070  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:40:19.643244  468607 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:40:19.643372  468607 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:40:19.651919  468607 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:40:19.660667  468607 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:40:19.661493  468607 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:40:20.959069  465459 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.003836426s
	I1124 03:40:22.125067  465459 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.16861254s
	I1124 03:40:22.188271  465459 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:40:22.216515  465459 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:40:22.258578  465459 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:40:22.259036  465459 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-262280 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:40:22.271087  465459 kubeadm.go:319] [bootstrap-token] Using token: 2yptao.r7yd6l7ev1yowcqn
	I1124 03:40:22.274016  465459 out.go:252]   - Configuring RBAC rules ...
	I1124 03:40:22.274139  465459 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:40:22.285868  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:40:22.302245  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:40:22.309475  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:40:22.314669  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:40:22.324840  465459 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:40:22.533610  465459 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:40:22.993832  465459 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:40:23.539106  465459 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:40:23.540728  465459 kubeadm.go:319] 
	I1124 03:40:23.540809  465459 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:40:23.540814  465459 kubeadm.go:319] 
	I1124 03:40:23.540891  465459 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:40:23.540895  465459 kubeadm.go:319] 
	I1124 03:40:23.540920  465459 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:40:23.541365  465459 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:40:23.541428  465459 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:40:23.541434  465459 kubeadm.go:319] 
	I1124 03:40:23.541487  465459 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:40:23.541491  465459 kubeadm.go:319] 
	I1124 03:40:23.541539  465459 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:40:23.541542  465459 kubeadm.go:319] 
	I1124 03:40:23.541594  465459 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:40:23.541669  465459 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:40:23.541737  465459 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:40:23.541741  465459 kubeadm.go:319] 
	I1124 03:40:23.542069  465459 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:40:23.542155  465459 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:40:23.542159  465459 kubeadm.go:319] 
	I1124 03:40:23.542500  465459 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2yptao.r7yd6l7ev1yowcqn \
	I1124 03:40:23.542614  465459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 \
	I1124 03:40:23.542853  465459 kubeadm.go:319] 	--control-plane 
	I1124 03:40:23.542871  465459 kubeadm.go:319] 
	I1124 03:40:23.543221  465459 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:40:23.543231  465459 kubeadm.go:319] 
	I1124 03:40:23.547828  465459 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2yptao.r7yd6l7ev1yowcqn \
	I1124 03:40:23.550982  465459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 
	I1124 03:40:23.555511  465459 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:40:23.555736  465459 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:40:23.555841  465459 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:40:23.555857  465459 cni.go:84] Creating CNI manager for ""
	I1124 03:40:23.555865  465459 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:23.559067  465459 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:40:19.836180  468607 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:40:19.836307  468607 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:40:20.837911  468607 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001791556s
	I1124 03:40:20.841824  468607 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:40:20.841924  468607 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 03:40:20.842025  468607 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:40:20.842109  468607 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:40:23.561962  465459 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:40:23.570649  465459 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:40:23.570666  465459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:40:23.611043  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:40:24.448553  465459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:40:24.448680  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:24.448750  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-262280 minikube.k8s.io/updated_at=2025_11_24T03_40_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-262280 minikube.k8s.io/primary=true
	I1124 03:40:25.025787  465459 ops.go:34] apiserver oom_adj: -16
	I1124 03:40:25.025937  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:25.526394  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:26.025997  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:26.526754  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:27.026641  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:27.253055  465459 kubeadm.go:1114] duration metric: took 2.804418537s to wait for elevateKubeSystemPrivileges
	I1124 03:40:27.253082  465459 kubeadm.go:403] duration metric: took 24.002425527s to StartCluster
	I1124 03:40:27.253101  465459 settings.go:142] acquiring lock: {Name:mk06b563e5bc383cd64ed92ea3d8ac6aac195923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:27.253165  465459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:40:27.253834  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/kubeconfig: {Name:mk59b88a9b5c6c93f7412b3f64976d4efe64bdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:27.254034  465459 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:40:27.254180  465459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:40:27.254424  465459 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:40:27.254486  465459 addons.go:70] Setting storage-provisioner=true in profile "no-preload-262280"
	I1124 03:40:27.254500  465459 addons.go:239] Setting addon storage-provisioner=true in "no-preload-262280"
	I1124 03:40:27.254522  465459 host.go:66] Checking if "no-preload-262280" exists ...
	I1124 03:40:27.255029  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.255348  465459 config.go:182] Loaded profile config "no-preload-262280": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:27.255425  465459 addons.go:70] Setting default-storageclass=true in profile "no-preload-262280"
	I1124 03:40:27.255459  465459 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-262280"
	I1124 03:40:27.255742  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.258534  465459 out.go:179] * Verifying Kubernetes components...
	I1124 03:40:27.264721  465459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:27.290687  465459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:40:27.293638  465459 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:27.293665  465459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:40:27.293734  465459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-262280
	I1124 03:40:27.295179  465459 addons.go:239] Setting addon default-storageclass=true in "no-preload-262280"
	I1124 03:40:27.295223  465459 host.go:66] Checking if "no-preload-262280" exists ...
	I1124 03:40:27.295646  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.333873  465459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/no-preload-262280/id_rsa Username:docker}
	I1124 03:40:27.342194  465459 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:27.342217  465459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:40:27.342282  465459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-262280
	I1124 03:40:27.369752  465459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/no-preload-262280/id_rsa Username:docker}
	I1124 03:40:28.289510  468607 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.446711872s
	I1124 03:40:28.718064  468607 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.876138727s
	I1124 03:40:28.086729  465459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:28.166898  465459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:40:28.167031  465459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:28.202605  465459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:29.603255  465459 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.436193485s)
	I1124 03:40:29.604024  465459 node_ready.go:35] waiting up to 6m0s for node "no-preload-262280" to be "Ready" ...
	I1124 03:40:29.604243  465459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.437316052s)
	I1124 03:40:29.604267  465459 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:40:30.149139  465459 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-262280" context rescaled to 1 replicas
	I1124 03:40:30.266899  465459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.064217856s)
	I1124 03:40:30.272444  465459 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 03:40:30.843974  468607 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002059314s
	I1124 03:40:30.870609  468607 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:40:30.901638  468607 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:40:30.924179  468607 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:40:30.924719  468607 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-818836 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:40:30.940184  468607 kubeadm.go:319] [bootstrap-token] Using token: 0bimeo.bzidkyv9i8e7nkw3
	I1124 03:40:30.943266  468607 out.go:252]   - Configuring RBAC rules ...
	I1124 03:40:30.943387  468607 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:40:30.951610  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:40:30.963677  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:40:30.971959  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:40:30.977923  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:40:30.986249  468607 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:40:31.251471  468607 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:40:31.778202  468607 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:40:32.251684  468607 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:40:32.253477  468607 kubeadm.go:319] 
	I1124 03:40:32.253550  468607 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:40:32.253555  468607 kubeadm.go:319] 
	I1124 03:40:32.253632  468607 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:40:32.253637  468607 kubeadm.go:319] 
	I1124 03:40:32.253662  468607 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:40:32.254164  468607 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:40:32.254227  468607 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:40:32.254231  468607 kubeadm.go:319] 
	I1124 03:40:32.254285  468607 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:40:32.254288  468607 kubeadm.go:319] 
	I1124 03:40:32.254336  468607 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:40:32.254339  468607 kubeadm.go:319] 
	I1124 03:40:32.254391  468607 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:40:32.254466  468607 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:40:32.254534  468607 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:40:32.254538  468607 kubeadm.go:319] 
	I1124 03:40:32.254839  468607 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:40:32.254921  468607 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:40:32.254928  468607 kubeadm.go:319] 
	I1124 03:40:32.255259  468607 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0bimeo.bzidkyv9i8e7nkw3 \
	I1124 03:40:32.255368  468607 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 \
	I1124 03:40:32.255600  468607 kubeadm.go:319] 	--control-plane 
	I1124 03:40:32.255610  468607 kubeadm.go:319] 
	I1124 03:40:32.255896  468607 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:40:32.255905  468607 kubeadm.go:319] 
	I1124 03:40:32.256198  468607 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0bimeo.bzidkyv9i8e7nkw3 \
	I1124 03:40:32.256558  468607 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 
	I1124 03:40:32.262002  468607 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:40:32.262227  468607 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:40:32.262331  468607 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:40:32.262347  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:40:32.262355  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:32.265575  468607 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:40:30.275374  465459 addons.go:530] duration metric: took 3.020937085s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1124 03:40:31.607716  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:32.268802  468607 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:40:32.276058  468607 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:40:32.276076  468607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:40:32.304040  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:40:32.950060  468607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:40:32.950194  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:32.950260  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-818836 minikube.k8s.io/updated_at=2025_11_24T03_40_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-818836 minikube.k8s.io/primary=true
	I1124 03:40:33.247296  468607 ops.go:34] apiserver oom_adj: -16
	I1124 03:40:33.247413  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:33.747810  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:34.247563  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:34.747727  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:35.248529  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:35.747874  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:36.248065  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:36.747517  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:37.248357  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:37.375914  468607 kubeadm.go:1114] duration metric: took 4.425764478s to wait for elevateKubeSystemPrivileges
	I1124 03:40:37.375948  468607 kubeadm.go:403] duration metric: took 26.189049705s to StartCluster
	I1124 03:40:37.375965  468607 settings.go:142] acquiring lock: {Name:mk06b563e5bc383cd64ed92ea3d8ac6aac195923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:37.376029  468607 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:40:37.377428  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/kubeconfig: {Name:mk59b88a9b5c6c93f7412b3f64976d4efe64bdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:37.377669  468607 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:40:37.377785  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:40:37.378042  468607 config.go:182] Loaded profile config "embed-certs-818836": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:37.378089  468607 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:40:37.378159  468607 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-818836"
	I1124 03:40:37.378172  468607 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-818836"
	I1124 03:40:37.378198  468607 host.go:66] Checking if "embed-certs-818836" exists ...
	I1124 03:40:37.378697  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.378976  468607 addons.go:70] Setting default-storageclass=true in profile "embed-certs-818836"
	I1124 03:40:37.379003  468607 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-818836"
	I1124 03:40:37.379254  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.381419  468607 out.go:179] * Verifying Kubernetes components...
	I1124 03:40:37.384428  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:37.421715  468607 addons.go:239] Setting addon default-storageclass=true in "embed-certs-818836"
	I1124 03:40:37.421763  468607 host.go:66] Checking if "embed-certs-818836" exists ...
	I1124 03:40:37.422190  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.443094  468607 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:40:34.107205  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	W1124 03:40:36.107495  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:37.445972  468607 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:37.445995  468607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:40:37.446062  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:37.468083  468607 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:37.468107  468607 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:40:37.468173  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:37.505843  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:37.512810  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:37.807453  468607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:37.824901  468607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:37.825083  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:40:37.844459  468607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:38.592240  468607 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 03:40:38.594605  468607 node_ready.go:35] waiting up to 6m0s for node "embed-certs-818836" to be "Ready" ...
	I1124 03:40:38.651892  468607 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:40:38.655002  468607 addons.go:530] duration metric: took 1.276905995s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:40:39.096916  468607 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-818836" context rescaled to 1 replicas
	W1124 03:40:38.606995  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	W1124 03:40:40.607344  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:42.608225  465459 node_ready.go:49] node "no-preload-262280" is "Ready"
	I1124 03:40:42.608272  465459 node_ready.go:38] duration metric: took 13.004210314s for node "no-preload-262280" to be "Ready" ...
	I1124 03:40:42.608287  465459 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:40:42.608350  465459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:42.623406  465459 api_server.go:72] duration metric: took 15.369343221s to wait for apiserver process to appear ...
	I1124 03:40:42.623436  465459 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:40:42.623469  465459 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:40:42.633313  465459 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:40:42.634411  465459 api_server.go:141] control plane version: v1.34.1
	I1124 03:40:42.634433  465459 api_server.go:131] duration metric: took 10.990663ms to wait for apiserver health ...
	I1124 03:40:42.634442  465459 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:40:42.638347  465459 system_pods.go:59] 8 kube-system pods found
	I1124 03:40:42.638381  465459 system_pods.go:61] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.638387  465459 system_pods.go:61] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.638392  465459 system_pods.go:61] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.638396  465459 system_pods.go:61] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.638401  465459 system_pods.go:61] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.638404  465459 system_pods.go:61] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.638407  465459 system_pods.go:61] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.638413  465459 system_pods.go:61] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:42.638420  465459 system_pods.go:74] duration metric: took 3.972643ms to wait for pod list to return data ...
	I1124 03:40:42.638431  465459 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:40:42.641761  465459 default_sa.go:45] found service account: "default"
	I1124 03:40:42.641824  465459 default_sa.go:55] duration metric: took 3.386704ms for default service account to be created ...
	I1124 03:40:42.641868  465459 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:40:42.645101  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:42.645134  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.645141  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.645147  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.645155  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.645160  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.645164  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.645168  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.645173  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:42.645193  465459 retry.go:31] will retry after 242.077653ms: missing components: kube-dns
	I1124 03:40:42.893628  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:42.893678  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.893684  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.893699  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.893704  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.893709  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.893713  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.893716  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.893720  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:42.893822  465459 retry.go:31] will retry after 373.532935ms: missing components: kube-dns
	W1124 03:40:40.597355  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:42.597817  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:44.598213  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	I1124 03:40:43.271122  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:43.271161  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:43.271172  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:43.271178  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:43.271182  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:43.271187  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:43.271191  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:43.271195  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:43.271206  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:43.271221  465459 retry.go:31] will retry after 322.6325ms: missing components: kube-dns
	I1124 03:40:43.599918  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:43.600007  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:43.600023  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:43.600030  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:43.600035  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:43.600040  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:43.600044  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:43.600048  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:43.600051  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:43.600066  465459 retry.go:31] will retry after 394.949668ms: missing components: kube-dns
	I1124 03:40:44.001892  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:44.001938  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Running
	I1124 03:40:44.001946  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:44.001952  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:44.001960  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:44.001965  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:44.001968  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:44.001972  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:44.001976  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:44.001989  465459 system_pods.go:126] duration metric: took 1.36009666s to wait for k8s-apps to be running ...
	I1124 03:40:44.001998  465459 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:40:44.002065  465459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:40:44.023562  465459 system_svc.go:56] duration metric: took 21.553336ms WaitForService to wait for kubelet
	I1124 03:40:44.023598  465459 kubeadm.go:587] duration metric: took 16.769539879s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:40:44.023618  465459 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:40:44.027009  465459 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:40:44.027046  465459 node_conditions.go:123] node cpu capacity is 2
	I1124 03:40:44.027060  465459 node_conditions.go:105] duration metric: took 3.437042ms to run NodePressure ...
	I1124 03:40:44.027074  465459 start.go:242] waiting for startup goroutines ...
	I1124 03:40:44.027110  465459 start.go:247] waiting for cluster config update ...
	I1124 03:40:44.027129  465459 start.go:256] writing updated cluster config ...
	I1124 03:40:44.027439  465459 ssh_runner.go:195] Run: rm -f paused
	I1124 03:40:44.032809  465459 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:44.036889  465459 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mj9gd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.042142  465459 pod_ready.go:94] pod "coredns-66bc5c9577-mj9gd" is "Ready"
	I1124 03:40:44.042172  465459 pod_ready.go:86] duration metric: took 5.207096ms for pod "coredns-66bc5c9577-mj9gd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.044894  465459 pod_ready.go:83] waiting for pod "etcd-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.050138  465459 pod_ready.go:94] pod "etcd-no-preload-262280" is "Ready"
	I1124 03:40:44.050222  465459 pod_ready.go:86] duration metric: took 5.300135ms for pod "etcd-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.052994  465459 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.057831  465459 pod_ready.go:94] pod "kube-apiserver-no-preload-262280" is "Ready"
	I1124 03:40:44.057868  465459 pod_ready.go:86] duration metric: took 4.8387ms for pod "kube-apiserver-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.060783  465459 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.437093  465459 pod_ready.go:94] pod "kube-controller-manager-no-preload-262280" is "Ready"
	I1124 03:40:44.437124  465459 pod_ready.go:86] duration metric: took 376.313274ms for pod "kube-controller-manager-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.637747  465459 pod_ready.go:83] waiting for pod "kube-proxy-xg8w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.042982  465459 pod_ready.go:94] pod "kube-proxy-xg8w4" is "Ready"
	I1124 03:40:45.043021  465459 pod_ready.go:86] duration metric: took 405.246191ms for pod "kube-proxy-xg8w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.238605  465459 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.636771  465459 pod_ready.go:94] pod "kube-scheduler-no-preload-262280" is "Ready"
	I1124 03:40:45.636842  465459 pod_ready.go:86] duration metric: took 398.208005ms for pod "kube-scheduler-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.636877  465459 pod_ready.go:40] duration metric: took 1.604024878s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:45.700045  465459 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:40:45.703311  465459 out.go:179] * Done! kubectl is now configured to use "no-preload-262280" cluster and "default" namespace by default
	W1124 03:40:47.097978  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:49.098467  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	I1124 03:40:49.600289  468607 node_ready.go:49] node "embed-certs-818836" is "Ready"
	I1124 03:40:49.600325  468607 node_ready.go:38] duration metric: took 11.005685237s for node "embed-certs-818836" to be "Ready" ...
	I1124 03:40:49.600342  468607 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:40:49.600401  468607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:49.616102  468607 api_server.go:72] duration metric: took 12.238396901s to wait for apiserver process to appear ...
	I1124 03:40:49.616131  468607 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:40:49.616151  468607 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:40:49.625663  468607 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 03:40:49.628248  468607 api_server.go:141] control plane version: v1.34.1
	I1124 03:40:49.628298  468607 api_server.go:131] duration metric: took 12.158646ms to wait for apiserver health ...
	I1124 03:40:49.628308  468607 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:40:49.635456  468607 system_pods.go:59] 8 kube-system pods found
	I1124 03:40:49.635501  468607 system_pods.go:61] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.635509  468607 system_pods.go:61] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.635527  468607 system_pods.go:61] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.635531  468607 system_pods.go:61] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.635536  468607 system_pods.go:61] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.635542  468607 system_pods.go:61] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.635546  468607 system_pods.go:61] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.635559  468607 system_pods.go:61] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.635566  468607 system_pods.go:74] duration metric: took 7.25158ms to wait for pod list to return data ...
	I1124 03:40:49.635579  468607 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:40:49.639861  468607 default_sa.go:45] found service account: "default"
	I1124 03:40:49.639903  468607 default_sa.go:55] duration metric: took 4.317754ms for default service account to be created ...
	I1124 03:40:49.639914  468607 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:40:49.642908  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:49.642943  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.642950  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.642956  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.642961  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.642975  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.642979  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.642984  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.642992  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.643018  468607 retry.go:31] will retry after 271.674831ms: missing components: kube-dns
	I1124 03:40:49.919376  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:49.919415  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.919423  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.919429  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.919435  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.919440  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.919444  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.919448  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.919455  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.919474  468607 retry.go:31] will retry after 335.268613ms: missing components: kube-dns
	I1124 03:40:50.262160  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:50.262218  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:50.262226  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:50.262264  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:50.262281  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:50.262290  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:50.262298  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:50.262302  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:50.262312  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:50.262349  468607 retry.go:31] will retry after 385.617551ms: missing components: kube-dns
	I1124 03:40:50.651970  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:50.652010  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:50.652018  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:50.652025  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:50.652030  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:50.652034  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:50.652038  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:50.652041  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:50.652047  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:50.652064  468607 retry.go:31] will retry after 470.580451ms: missing components: kube-dns
	I1124 03:40:51.133462  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:51.133497  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Running
	I1124 03:40:51.133504  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:51.133509  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:51.133514  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:51.133518  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:51.133528  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:51.133533  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:51.133538  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Running
	I1124 03:40:51.133558  468607 system_pods.go:126] duration metric: took 1.493636996s to wait for k8s-apps to be running ...
	I1124 03:40:51.133566  468607 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:40:51.133625  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:40:51.151193  468607 system_svc.go:56] duration metric: took 17.617707ms WaitForService to wait for kubelet
	I1124 03:40:51.151222  468607 kubeadm.go:587] duration metric: took 13.773521156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:40:51.151242  468607 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:40:51.158998  468607 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:40:51.159035  468607 node_conditions.go:123] node cpu capacity is 2
	I1124 03:40:51.159163  468607 node_conditions.go:105] duration metric: took 7.914387ms to run NodePressure ...
	I1124 03:40:51.159180  468607 start.go:242] waiting for startup goroutines ...
	I1124 03:40:51.159201  468607 start.go:247] waiting for cluster config update ...
	I1124 03:40:51.159225  468607 start.go:256] writing updated cluster config ...
	I1124 03:40:51.159566  468607 ssh_runner.go:195] Run: rm -f paused
	I1124 03:40:51.163938  468607 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:51.233364  468607 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dgvvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.238633  468607 pod_ready.go:94] pod "coredns-66bc5c9577-dgvvg" is "Ready"
	I1124 03:40:51.238668  468607 pod_ready.go:86] duration metric: took 5.226756ms for pod "coredns-66bc5c9577-dgvvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.242048  468607 pod_ready.go:83] waiting for pod "etcd-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.247506  468607 pod_ready.go:94] pod "etcd-embed-certs-818836" is "Ready"
	I1124 03:40:51.247534  468607 pod_ready.go:86] duration metric: took 5.457921ms for pod "etcd-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.250505  468607 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.256168  468607 pod_ready.go:94] pod "kube-apiserver-embed-certs-818836" is "Ready"
	I1124 03:40:51.256200  468607 pod_ready.go:86] duration metric: took 5.665265ms for pod "kube-apiserver-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.258827  468607 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.568969  468607 pod_ready.go:94] pod "kube-controller-manager-embed-certs-818836" is "Ready"
	I1124 03:40:51.568996  468607 pod_ready.go:86] duration metric: took 310.144443ms for pod "kube-controller-manager-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.768346  468607 pod_ready.go:83] waiting for pod "kube-proxy-kqtwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.168601  468607 pod_ready.go:94] pod "kube-proxy-kqtwg" is "Ready"
	I1124 03:40:52.168630  468607 pod_ready.go:86] duration metric: took 400.250484ms for pod "kube-proxy-kqtwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.369520  468607 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.768587  468607 pod_ready.go:94] pod "kube-scheduler-embed-certs-818836" is "Ready"
	I1124 03:40:52.768616  468607 pod_ready.go:86] duration metric: took 399.065879ms for pod "kube-scheduler-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.768629  468607 pod_ready.go:40] duration metric: took 1.604655617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:52.832190  468607 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:40:52.835417  468607 out.go:179] * Done! kubectl is now configured to use "embed-certs-818836" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6efc4ca7860c3       1611cd07b61d5       8 seconds ago       Running             busybox                   0                   e3c77ca9f7fed       busybox                                     default
	1c714ace422b1       138784d87c9c5       14 seconds ago      Running             coredns                   0                   bdfbadfad1ed4       coredns-66bc5c9577-mj9gd                    kube-system
	be397b4afce85       66749159455b3       14 seconds ago      Running             storage-provisioner       0                   e34d4f2fbf3ee       storage-provisioner                         kube-system
	cf95865919242       b1a8c6f707935       26 seconds ago      Running             kindnet-cni               0                   d93319665440f       kindnet-tp8zg                               kube-system
	95117708edab7       05baa95f5142d       29 seconds ago      Running             kube-proxy                0                   fb2356fed4bdf       kube-proxy-xg8w4                            kube-system
	02103f0046d80       7eb2c6ff0c5a7       44 seconds ago      Running             kube-controller-manager   0                   2ff2010f77339       kube-controller-manager-no-preload-262280   kube-system
	023306d10623d       b5f57ec6b9867       44 seconds ago      Running             kube-scheduler            0                   ec132d9c3aaed       kube-scheduler-no-preload-262280            kube-system
	0f0cdb21b9f41       a1894772a478e       45 seconds ago      Running             etcd                      0                   2d120cb1cb5d4       etcd-no-preload-262280                      kube-system
	e4efe89b5bce7       43911e833d64d       45 seconds ago      Running             kube-apiserver            0                   b5d0668db9e9a       kube-apiserver-no-preload-262280            kube-system
	
	
	==> containerd <==
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.673916562Z" level=info msg="connecting to shim be397b4afce8525a05276b3a7b1dc032772656b57834670e5afd4dcec6228318" address="unix:///run/containerd/s/5f43b859443edee740bd578455a75a950a1789dc846e6f3612bae93fffa56e11" protocol=ttrpc version=3
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.744969798Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-mj9gd,Uid:875322e9-dddd-4618-beec-76c737d16e3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdfbadfad1ed4b60d5835a593e40a10928671d9e0bc8316e4d9738e714ea8896\""
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.752777131Z" level=info msg="CreateContainer within sandbox \"bdfbadfad1ed4b60d5835a593e40a10928671d9e0bc8316e4d9738e714ea8896\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.774840492Z" level=info msg="StartContainer for \"be397b4afce8525a05276b3a7b1dc032772656b57834670e5afd4dcec6228318\" returns successfully"
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.776759609Z" level=info msg="Container 1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.797295799Z" level=info msg="CreateContainer within sandbox \"bdfbadfad1ed4b60d5835a593e40a10928671d9e0bc8316e4d9738e714ea8896\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4\""
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.798900666Z" level=info msg="StartContainer for \"1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4\""
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.800022108Z" level=info msg="connecting to shim 1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4" address="unix:///run/containerd/s/31aacace18df1bf3670145bc73b7dbb48260829092de200778000bfeacdae2de" protocol=ttrpc version=3
	Nov 24 03:40:42 no-preload-262280 containerd[760]: time="2025-11-24T03:40:42.909396460Z" level=info msg="StartContainer for \"1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4\" returns successfully"
	Nov 24 03:40:46 no-preload-262280 containerd[760]: time="2025-11-24T03:40:46.264416826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:820858e8-9815-41a7-a6c3-43bbfe947f4b,Namespace:default,Attempt:0,}"
	Nov 24 03:40:46 no-preload-262280 containerd[760]: time="2025-11-24T03:40:46.352645822Z" level=info msg="connecting to shim e3c77ca9f7fedf585c665134c5d43e1daa25554bbc4a8d867de7dee57a3e939f" address="unix:///run/containerd/s/e5560b01ec9b1eea8540256781f39da7201a3b88ccb73d2ed3cc50bdd8ed3a4f" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:40:46 no-preload-262280 containerd[760]: time="2025-11-24T03:40:46.415389667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:820858e8-9815-41a7-a6c3-43bbfe947f4b,Namespace:default,Attempt:0,} returns sandbox id \"e3c77ca9f7fedf585c665134c5d43e1daa25554bbc4a8d867de7dee57a3e939f\""
	Nov 24 03:40:46 no-preload-262280 containerd[760]: time="2025-11-24T03:40:46.419043474Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.519221325Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.520965672Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937190"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.523440978Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.526886192Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.527771859Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.108679392s"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.527812926Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.536142371Z" level=info msg="CreateContainer within sandbox \"e3c77ca9f7fedf585c665134c5d43e1daa25554bbc4a8d867de7dee57a3e939f\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.551658822Z" level=info msg="Container 6efc4ca7860c3df1267db2a0221aebf2421c86a97c855fad6c73908626a7195f: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.563797209Z" level=info msg="CreateContainer within sandbox \"e3c77ca9f7fedf585c665134c5d43e1daa25554bbc4a8d867de7dee57a3e939f\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"6efc4ca7860c3df1267db2a0221aebf2421c86a97c855fad6c73908626a7195f\""
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.565080737Z" level=info msg="StartContainer for \"6efc4ca7860c3df1267db2a0221aebf2421c86a97c855fad6c73908626a7195f\""
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.566666223Z" level=info msg="connecting to shim 6efc4ca7860c3df1267db2a0221aebf2421c86a97c855fad6c73908626a7195f" address="unix:///run/containerd/s/e5560b01ec9b1eea8540256781f39da7201a3b88ccb73d2ed3cc50bdd8ed3a4f" protocol=ttrpc version=3
	Nov 24 03:40:48 no-preload-262280 containerd[760]: time="2025-11-24T03:40:48.632395433Z" level=info msg="StartContainer for \"6efc4ca7860c3df1267db2a0221aebf2421c86a97c855fad6c73908626a7195f\" returns successfully"
	
	
	==> coredns [1c714ace422b1c7ad3474339d102e6d6e529b3244e7b7ded2d5f65163f4a4dc4] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47901 - 61057 "HINFO IN 2850791332031184546.6905526921411133570. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.028132624s
	
	
	==> describe nodes <==
	Name:               no-preload-262280
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-262280
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=no-preload-262280
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_40_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:40:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-262280
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:40:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:40:54 +0000   Mon, 24 Nov 2025 03:40:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:40:54 +0000   Mon, 24 Nov 2025 03:40:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:40:54 +0000   Mon, 24 Nov 2025 03:40:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:40:54 +0000   Mon, 24 Nov 2025 03:40:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-262280
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                79fbca72-e570-478b-819a-4e66cc7dc3e1
	  Boot ID:                    63a8a852-1462-44b1-9d6f-f77d26e8568f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-mj9gd                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-262280                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-tp8zg                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-262280             250m (12%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-no-preload-262280    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-xg8w4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-262280             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Warning  CgroupV1                 46s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node no-preload-262280 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node no-preload-262280 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x7 over 46s)  kubelet          Node no-preload-262280 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  46s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node no-preload-262280 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node no-preload-262280 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node no-preload-262280 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-262280 event: Registered Node no-preload-262280 in Controller
	  Normal   NodeReady                15s                kubelet          Node no-preload-262280 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:27] overlayfs: idmapped layers are currently not supported
	[Nov24 02:28] overlayfs: idmapped layers are currently not supported
	[Nov24 02:30] overlayfs: idmapped layers are currently not supported
	[  +9.824160] overlayfs: idmapped layers are currently not supported
	[Nov24 02:31] overlayfs: idmapped layers are currently not supported
	[Nov24 02:32] overlayfs: idmapped layers are currently not supported
	[ +27.981383] overlayfs: idmapped layers are currently not supported
	[Nov24 02:33] overlayfs: idmapped layers are currently not supported
	[Nov24 02:34] overlayfs: idmapped layers are currently not supported
	[Nov24 02:35] overlayfs: idmapped layers are currently not supported
	[Nov24 02:36] overlayfs: idmapped layers are currently not supported
	[Nov24 02:37] overlayfs: idmapped layers are currently not supported
	[Nov24 02:38] overlayfs: idmapped layers are currently not supported
	[Nov24 02:39] overlayfs: idmapped layers are currently not supported
	[ +24.837346] overlayfs: idmapped layers are currently not supported
	[Nov24 02:40] overlayfs: idmapped layers are currently not supported
	[ +40.823948] overlayfs: idmapped layers are currently not supported
	[  +1.705989] overlayfs: idmapped layers are currently not supported
	[Nov24 02:42] overlayfs: idmapped layers are currently not supported
	[ +21.661904] overlayfs: idmapped layers are currently not supported
	[Nov24 02:44] overlayfs: idmapped layers are currently not supported
	[  +1.074777] overlayfs: idmapped layers are currently not supported
	[Nov24 02:46] overlayfs: idmapped layers are currently not supported
	[ +19.120392] overlayfs: idmapped layers are currently not supported
	[Nov24 02:48] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [0f0cdb21b9f416ec70e9a682e42f7629ec439caab6d4dd3070e91e4f3347f9a2] <==
	{"level":"warn","ts":"2025-11-24T03:40:15.995514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.078501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.148576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.200007Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.229173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.249904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.272081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.302528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.384668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.452176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.487364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.532673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.574626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.624335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.666850Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.724683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.762223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.801085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.853747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.908737Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.952837Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:16.982649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:17.046709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:17.251173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54270","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T03:40:27.963843Z","caller":"traceutil/trace.go:172","msg":"trace[212483648] transaction","detail":"{read_only:false; response_revision:391; number_of_response:1; }","duration":"104.467992ms","start":"2025-11-24T03:40:27.859342Z","end":"2025-11-24T03:40:27.963810Z","steps":["trace[212483648] 'process raft request'  (duration: 104.124482ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:40:57 up  2:23,  0 user,  load average: 5.26, 3.85, 3.06
	Linux no-preload-262280 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cf95865919242f79d08de7186f2a000f985534d1820a8d95476ba6c09013ab0f] <==
	I1124 03:40:31.735103       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:40:31.824956       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:40:31.825150       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:40:31.825174       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:40:31.825186       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:40:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:40:32.034348       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:40:32.034540       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:40:32.034633       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:40:32.036186       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:40:32.236136       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:40:32.236354       1 metrics.go:72] Registering metrics
	I1124 03:40:32.236555       1 controller.go:711] "Syncing nftables rules"
	I1124 03:40:42.040458       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:40:42.040566       1 main.go:301] handling current node
	I1124 03:40:52.033937       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:40:52.033974       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e4efe89b5bce740d11c2842c5d1aa62daf0e473f21d4b8b8d2e641a31d84cf81] <==
	I1124 03:40:18.733216       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 03:40:18.736979       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 03:40:18.741837       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 03:40:18.809158       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:18.809450       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:40:18.853295       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:18.859753       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:40:19.349193       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:40:19.369653       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:40:19.369869       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:40:20.727359       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:40:20.799911       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:40:20.887437       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:40:20.901043       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:40:20.902503       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:40:20.908844       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:40:21.369264       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:40:22.971714       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:40:22.991881       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:40:23.014896       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:40:26.962737       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:40:27.292544       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:40:27.445268       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:27.487615       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 03:40:54.131455       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:39788: use of closed network connection
	
	
	==> kube-controller-manager [02103f0046d800181ffcbc86a11e512c20c091ee5db459b81d7dae1343cef3dc] <==
	I1124 03:40:26.408553       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:40:26.409235       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 03:40:26.408572       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 03:40:26.411044       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:40:26.419238       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:40:26.419453       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-262280" podCIDRs=["10.244.0.0/24"]
	I1124 03:40:26.419593       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:40:26.429203       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:40:26.429470       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 03:40:26.436511       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:40:26.436532       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:40:26.445905       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:40:26.451518       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:40:26.456968       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:40:26.459061       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 03:40:26.459346       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 03:40:26.459555       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-262280"
	I1124 03:40:26.459714       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 03:40:26.462852       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:40:26.467002       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:40:26.474122       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 03:40:26.486109       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:40:26.486324       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:40:26.486417       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:40:46.463443       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [95117708edab7db4f15203f63afc7cd4e58237d7c305b8ce721c7cf427b80ce3] <==
	I1124 03:40:28.981504       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:40:29.094351       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:40:29.259048       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:40:29.259090       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:40:29.259161       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:40:29.531670       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:40:29.531728       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:40:29.567086       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:40:29.583090       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:40:29.583123       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:40:29.585641       1 config.go:200] "Starting service config controller"
	I1124 03:40:29.585658       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:40:29.596249       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:40:29.596312       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:40:29.596339       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:40:29.596344       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:40:29.620963       1 config.go:309] "Starting node config controller"
	I1124 03:40:29.620982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:40:29.621008       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:40:29.697590       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:40:29.697628       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:40:29.697682       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [023306d10623d671ab0ee9497a7be725838e9df82aac87a9f0e200807f3272dc] <==
	I1124 03:40:18.763708       1 serving.go:386] Generated self-signed cert in-memory
	I1124 03:40:22.062823       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 03:40:22.062864       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:40:22.077992       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 03:40:22.079863       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:40:22.079880       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 03:40:22.133672       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 03:40:22.079888       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 03:40:22.079824       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 03:40:22.140708       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 03:40:22.140915       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 03:40:22.233951       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 03:40:22.243674       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 03:40:22.243674       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:40:24 no-preload-262280 kubelet[2119]: I1124 03:40:24.185464    2119 apiserver.go:52] "Watching apiserver"
	Nov 24 03:40:24 no-preload-262280 kubelet[2119]: I1124 03:40:24.301633    2119 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 03:40:24 no-preload-262280 kubelet[2119]: I1124 03:40:24.628833    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-262280" podStartSLOduration=1.628812923 podStartE2EDuration="1.628812923s" podCreationTimestamp="2025-11-24 03:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:24.585985084 +0000 UTC m=+1.656531374" watchObservedRunningTime="2025-11-24 03:40:24.628812923 +0000 UTC m=+1.699359082"
	Nov 24 03:40:24 no-preload-262280 kubelet[2119]: I1124 03:40:24.708820    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-262280" podStartSLOduration=1.708803304 podStartE2EDuration="1.708803304s" podCreationTimestamp="2025-11-24 03:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:24.69031654 +0000 UTC m=+1.760862699" watchObservedRunningTime="2025-11-24 03:40:24.708803304 +0000 UTC m=+1.779349471"
	Nov 24 03:40:26 no-preload-262280 kubelet[2119]: I1124 03:40:26.439375    2119 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:40:26 no-preload-262280 kubelet[2119]: I1124 03:40:26.441272    2119 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.632400    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8388de5-8f36-444e-864f-efe3b946972c-xtables-lock\") pod \"kube-proxy-xg8w4\" (UID: \"e8388de5-8f36-444e-864f-efe3b946972c\") " pod="kube-system/kube-proxy-xg8w4"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.632448    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmxgz\" (UniqueName: \"kubernetes.io/projected/8b8b163b-5585-4d91-9717-95f656987530-kube-api-access-gmxgz\") pod \"kindnet-tp8zg\" (UID: \"8b8b163b-5585-4d91-9717-95f656987530\") " pod="kube-system/kindnet-tp8zg"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635779    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8388de5-8f36-444e-864f-efe3b946972c-kube-proxy\") pod \"kube-proxy-xg8w4\" (UID: \"e8388de5-8f36-444e-864f-efe3b946972c\") " pod="kube-system/kube-proxy-xg8w4"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635835    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8388de5-8f36-444e-864f-efe3b946972c-lib-modules\") pod \"kube-proxy-xg8w4\" (UID: \"e8388de5-8f36-444e-864f-efe3b946972c\") " pod="kube-system/kube-proxy-xg8w4"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635863    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtt8n\" (UniqueName: \"kubernetes.io/projected/e8388de5-8f36-444e-864f-efe3b946972c-kube-api-access-wtt8n\") pod \"kube-proxy-xg8w4\" (UID: \"e8388de5-8f36-444e-864f-efe3b946972c\") " pod="kube-system/kube-proxy-xg8w4"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635898    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b8b163b-5585-4d91-9717-95f656987530-xtables-lock\") pod \"kindnet-tp8zg\" (UID: \"8b8b163b-5585-4d91-9717-95f656987530\") " pod="kube-system/kindnet-tp8zg"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635924    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8b8b163b-5585-4d91-9717-95f656987530-cni-cfg\") pod \"kindnet-tp8zg\" (UID: \"8b8b163b-5585-4d91-9717-95f656987530\") " pod="kube-system/kindnet-tp8zg"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.635949    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b8b163b-5585-4d91-9717-95f656987530-lib-modules\") pod \"kindnet-tp8zg\" (UID: \"8b8b163b-5585-4d91-9717-95f656987530\") " pod="kube-system/kindnet-tp8zg"
	Nov 24 03:40:27 no-preload-262280 kubelet[2119]: I1124 03:40:27.886517    2119 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 03:40:31 no-preload-262280 kubelet[2119]: I1124 03:40:31.436921    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xg8w4" podStartSLOduration=4.43690131 podStartE2EDuration="4.43690131s" podCreationTimestamp="2025-11-24 03:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:29.85323751 +0000 UTC m=+6.923783677" watchObservedRunningTime="2025-11-24 03:40:31.43690131 +0000 UTC m=+8.507447477"
	Nov 24 03:40:31 no-preload-262280 kubelet[2119]: I1124 03:40:31.892039    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tp8zg" podStartSLOduration=1.758004291 podStartE2EDuration="4.892009095s" podCreationTimestamp="2025-11-24 03:40:27 +0000 UTC" firstStartedPulling="2025-11-24 03:40:28.285911897 +0000 UTC m=+5.356458055" lastFinishedPulling="2025-11-24 03:40:31.4199167 +0000 UTC m=+8.490462859" observedRunningTime="2025-11-24 03:40:31.891154206 +0000 UTC m=+8.961700389" watchObservedRunningTime="2025-11-24 03:40:31.892009095 +0000 UTC m=+8.962555254"
	Nov 24 03:40:42 no-preload-262280 kubelet[2119]: I1124 03:40:42.110387    2119 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:40:42 no-preload-262280 kubelet[2119]: I1124 03:40:42.249012    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/430685c9-d2cd-4da8-90bb-666070ea7af5-tmp\") pod \"storage-provisioner\" (UID: \"430685c9-d2cd-4da8-90bb-666070ea7af5\") " pod="kube-system/storage-provisioner"
	Nov 24 03:40:42 no-preload-262280 kubelet[2119]: I1124 03:40:42.249082    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrjsc\" (UniqueName: \"kubernetes.io/projected/430685c9-d2cd-4da8-90bb-666070ea7af5-kube-api-access-wrjsc\") pod \"storage-provisioner\" (UID: \"430685c9-d2cd-4da8-90bb-666070ea7af5\") " pod="kube-system/storage-provisioner"
	Nov 24 03:40:42 no-preload-262280 kubelet[2119]: I1124 03:40:42.349503    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2m6vg\" (UniqueName: \"kubernetes.io/projected/875322e9-dddd-4618-beec-76c737d16e3c-kube-api-access-2m6vg\") pod \"coredns-66bc5c9577-mj9gd\" (UID: \"875322e9-dddd-4618-beec-76c737d16e3c\") " pod="kube-system/coredns-66bc5c9577-mj9gd"
	Nov 24 03:40:42 no-preload-262280 kubelet[2119]: I1124 03:40:42.349578    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/875322e9-dddd-4618-beec-76c737d16e3c-config-volume\") pod \"coredns-66bc5c9577-mj9gd\" (UID: \"875322e9-dddd-4618-beec-76c737d16e3c\") " pod="kube-system/coredns-66bc5c9577-mj9gd"
	Nov 24 03:40:43 no-preload-262280 kubelet[2119]: I1124 03:40:43.888540    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.888519456000001 podStartE2EDuration="13.888519456s" podCreationTimestamp="2025-11-24 03:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:42.889027107 +0000 UTC m=+19.959573274" watchObservedRunningTime="2025-11-24 03:40:43.888519456 +0000 UTC m=+20.959065623"
	Nov 24 03:40:43 no-preload-262280 kubelet[2119]: I1124 03:40:43.907263    2119 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-mj9gd" podStartSLOduration=16.907233513 podStartE2EDuration="16.907233513s" podCreationTimestamp="2025-11-24 03:40:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:43.889176625 +0000 UTC m=+20.959722784" watchObservedRunningTime="2025-11-24 03:40:43.907233513 +0000 UTC m=+20.977779672"
	Nov 24 03:40:46 no-preload-262280 kubelet[2119]: I1124 03:40:46.073905    2119 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f6bk\" (UniqueName: \"kubernetes.io/projected/820858e8-9815-41a7-a6c3-43bbfe947f4b-kube-api-access-2f6bk\") pod \"busybox\" (UID: \"820858e8-9815-41a7-a6c3-43bbfe947f4b\") " pod="default/busybox"
	
	
	==> storage-provisioner [be397b4afce8525a05276b3a7b1dc032772656b57834670e5afd4dcec6228318] <==
	I1124 03:40:42.767588       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:40:42.811362       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:40:42.811414       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:40:42.817106       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:42.827456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:40:42.828066       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:40:42.828389       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"23985941-51d1-473d-8dad-195f98b18f60", APIVersion:"v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-262280_b83dfcf7-d29d-4e4e-b161-2eb9414fe41e became leader
	I1124 03:40:42.828452       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-262280_b83dfcf7-d29d-4e4e-b161-2eb9414fe41e!
	W1124 03:40:42.842119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:42.848699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:40:42.943618       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-262280_b83dfcf7-d29d-4e4e-b161-2eb9414fe41e!
	W1124 03:40:44.852117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:44.859380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:46.862646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:46.867501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:48.871150       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:48.876088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:50.880041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:50.885681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:52.889217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:52.899170       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:54.903238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:54.908589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:56.912371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:56.920533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
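helpers_test.go: note: the repeated "v1 Endpoints is deprecated in v1.33+" lines above are warnings, not errors; they appear to come from the client-go warning handler (warnings.go) because the storage provisioner still takes its leader-election lease through a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, as the log shows). A minimal way to look at that lease object by hand, assuming the no-preload-262280 cluster is still reachable:

	kubectl --context no-preload-262280 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml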
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-262280 -n no-preload-262280
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-262280 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (12.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (14.67s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-818836 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [558523a2-89e3-43af-9d9f-326d9e1d9629] Pending
helpers_test.go:352: "busybox" [558523a2-89e3-43af-9d9f-326d9e1d9629] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [558523a2-89e3-43af-9d9f-326d9e1d9629] Running
E1124 03:40:57.755817  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003850271s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-818836 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
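start_stop_delete_test.go: note: the line above is the actual assertion failure: the test execs into the busybox pod and expects 'ulimit -n' (the open-file limit) to be 1048576, but the container reports 1024. A minimal way to re-run the same probe by hand, assuming the embed-certs-818836 profile is still up (the second command is a hypothetical extra check against the node itself, not part of the test):

	# the exact check the test performs inside the pod
	kubectl --context embed-certs-818836 exec busybox -- /bin/sh -c "ulimit -n"
	# hypothetical follow-up: the limit seen on the minikube node
	out/minikube-linux-arm64 -p embed-certs-818836 ssh -- "ulimit -n"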
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-818836
helpers_test.go:243: (dbg) docker inspect embed-certs-818836:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4",
	        "Created": "2025-11-24T03:40:02.463990203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 469176,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:40:02.542904474Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4/hosts",
	        "LogPath": "/var/lib/docker/containers/18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4/18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4-json.log",
	        "Name": "/embed-certs-818836",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-818836:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-818836",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4",
	                "LowerDir": "/var/lib/docker/overlay2/2c7aa8849c9ad820565f9f23d196e9e185f2fc05ac0615325ea27f4da72c1af3-init/diff:/var/lib/docker/overlay2/11b197f530f0d571f61892814d8d4c774f7d3e5a97abdd8c5aa182cc99b2d856/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c7aa8849c9ad820565f9f23d196e9e185f2fc05ac0615325ea27f4da72c1af3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c7aa8849c9ad820565f9f23d196e9e185f2fc05ac0615325ea27f4da72c1af3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c7aa8849c9ad820565f9f23d196e9e185f2fc05ac0615325ea27f4da72c1af3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-818836",
	                "Source": "/var/lib/docker/volumes/embed-certs-818836/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-818836",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-818836",
	                "name.minikube.sigs.k8s.io": "embed-certs-818836",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a91155f5a0322a5aab9ebc09616599e4bfe72bb49407d94c7deb2716f8c094d3",
	            "SandboxKey": "/var/run/docker/netns/a91155f5a032",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-818836": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:3d:a2:14:d3:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "91615606eb797a1b1696bed9db8d1fe7d1d91433226c147019609786a547b7b9",
	                    "EndpointID": "e42f96d9c325bd1298eadd29f90d35abc1ace7d658114974a4a778c02f3e5bb3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-818836",
	                        "18d18a9ae732"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
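helpers_test.go: note: the inspect dump above is the full JSON; individual fields (for example the container state and the host ports mapped to 22/8443) can be pulled out with docker inspect's standard --format/-f Go-template flag. A short sketch, using the profile name from this report:

	docker inspect -f '{{.State.Status}}' embed-certs-818836
	docker inspect -f '{{json .NetworkSettings.Ports}}' embed-certs-818836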
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-818836 -n embed-certs-818836
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-818836 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-818836 logs -n 25: (1.228174347s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:35 UTC │
	│ delete  │ -p kubernetes-upgrade-850960                                                                                                                                                                                                                        │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-expiration-846384 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ force-systemd-env-574539 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p force-systemd-env-574539                                                                                                                                                                                                                         │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-options-216763 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ cert-options-216763 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ -p cert-options-216763 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p cert-options-216763                                                                                                                                                                                                                              │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-098965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ stop    │ -p old-k8s-version-098965 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-098965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ start   │ -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ image   │ old-k8s-version-098965 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ pause   │ -p old-k8s-version-098965 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ unpause │ -p old-k8s-version-098965 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p old-k8s-version-098965                                                                                                                                                                                                                           │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p old-k8s-version-098965                                                                                                                                                                                                                           │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p no-preload-262280 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-262280         │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ start   │ -p cert-expiration-846384 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p cert-expiration-846384                                                                                                                                                                                                                           │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p embed-certs-818836 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-818836        │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ addons  │ enable metrics-server -p no-preload-262280 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-262280         │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ stop    │ -p no-preload-262280 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-262280         │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:39:54
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:39:54.770134  468607 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:39:54.770765  468607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:54.770803  468607 out.go:374] Setting ErrFile to fd 2...
	I1124 03:39:54.770823  468607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:54.771173  468607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:39:54.771694  468607 out.go:368] Setting JSON to false
	I1124 03:39:54.772710  468607 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8523,"bootTime":1763947072,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:39:54.772814  468607 start.go:143] virtualization:  
	I1124 03:39:54.776844  468607 out.go:179] * [embed-certs-818836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:39:54.781644  468607 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:39:54.781732  468607 notify.go:221] Checking for updates...
	I1124 03:39:54.787053  468607 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:39:54.790493  468607 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:39:54.793844  468607 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:39:54.797082  468607 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:39:54.800233  468607 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:39:54.803908  468607 config.go:182] Loaded profile config "no-preload-262280": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:39:54.804064  468607 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:39:54.846350  468607 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:39:54.846478  468607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:39:54.943233  468607 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-24 03:39:54.932926558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:39:54.943335  468607 docker.go:319] overlay module found
	I1124 03:39:54.946509  468607 out.go:179] * Using the docker driver based on user configuration
	I1124 03:39:54.950114  468607 start.go:309] selected driver: docker
	I1124 03:39:54.950133  468607 start.go:927] validating driver "docker" against <nil>
	I1124 03:39:54.950147  468607 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:39:54.950879  468607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:39:55.051907  468607 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-24 03:39:55.038363177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:39:55.052067  468607 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:39:55.052307  468607 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:39:55.055713  468607 out.go:179] * Using Docker driver with root privileges
	I1124 03:39:55.058665  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:39:55.058771  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:39:55.058786  468607 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:39:55.058875  468607 start.go:353] cluster config:
	{Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:39:55.062215  468607 out.go:179] * Starting "embed-certs-818836" primary control-plane node in "embed-certs-818836" cluster
	I1124 03:39:55.065106  468607 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:39:55.068109  468607 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:39:55.071078  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:39:55.071139  468607 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 03:39:55.071152  468607 cache.go:65] Caching tarball of preloaded images
	I1124 03:39:55.071260  468607 preload.go:238] Found /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 03:39:55.071275  468607 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 03:39:55.071398  468607 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json ...
	I1124 03:39:55.071424  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json: {Name:mk937c632daa818953aa058a3473ebcd37b1b74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:39:55.071593  468607 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:39:55.094186  468607 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:39:55.094210  468607 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:39:55.094227  468607 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:39:55.094258  468607 start.go:360] acquireMachinesLock for embed-certs-818836: {Name:mk5ce88de168b198a494858bb8201276136df5bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:39:55.094377  468607 start.go:364] duration metric: took 97.543µs to acquireMachinesLock for "embed-certs-818836"
	I1124 03:39:55.094417  468607 start.go:93] Provisioning new machine with config: &{Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:39:55.094497  468607 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:39:53.821541  465459 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.603191329s)
	I1124 03:39:53.821565  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 03:39:53.821584  465459 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:53.821636  465459 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:57.814796  465459 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.993137445s)
	I1124 03:39:57.814820  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:39:57.814838  465459 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:39:57.814894  465459 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:39:55.099888  468607 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:39:55.100165  468607 start.go:159] libmachine.API.Create for "embed-certs-818836" (driver="docker")
	I1124 03:39:55.100219  468607 client.go:173] LocalClient.Create starting
	I1124 03:39:55.100327  468607 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem
	I1124 03:39:55.100376  468607 main.go:143] libmachine: Decoding PEM data...
	I1124 03:39:55.100396  468607 main.go:143] libmachine: Parsing certificate...
	I1124 03:39:55.100448  468607 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem
	I1124 03:39:55.100500  468607 main.go:143] libmachine: Decoding PEM data...
	I1124 03:39:55.100517  468607 main.go:143] libmachine: Parsing certificate...
	I1124 03:39:55.100910  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:39:55.125795  468607 cli_runner.go:211] docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:39:55.125884  468607 network_create.go:284] running [docker network inspect embed-certs-818836] to gather additional debugging logs...
	I1124 03:39:55.125914  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836
	W1124 03:39:55.143227  468607 cli_runner.go:211] docker network inspect embed-certs-818836 returned with exit code 1
	I1124 03:39:55.143261  468607 network_create.go:287] error running [docker network inspect embed-certs-818836]: docker network inspect embed-certs-818836: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-818836 not found
	I1124 03:39:55.143275  468607 network_create.go:289] output of [docker network inspect embed-certs-818836]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-818836 not found
	
	** /stderr **
	I1124 03:39:55.143372  468607 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:39:55.161548  468607 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-752aaa40bb3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:00:20:e4:71:15} reservation:<nil>}
	I1124 03:39:55.161924  468607 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbb0dee281db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:ff:07:3e:91:0f} reservation:<nil>}
	I1124 03:39:55.162178  468607 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d95ffec60547 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:b5:f2:ed:07:1e} reservation:<nil>}
	I1124 03:39:55.162624  468607 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2c210}
	I1124 03:39:55.162647  468607 network_create.go:124] attempt to create docker network embed-certs-818836 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 03:39:55.162703  468607 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-818836 embed-certs-818836
	I1124 03:39:55.225512  468607 network_create.go:108] docker network embed-certs-818836 192.168.76.0/24 created
	I1124 03:39:55.225548  468607 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-818836" container
	I1124 03:39:55.225630  468607 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:39:55.242034  468607 cli_runner.go:164] Run: docker volume create embed-certs-818836 --label name.minikube.sigs.k8s.io=embed-certs-818836 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:39:55.262160  468607 oci.go:103] Successfully created a docker volume embed-certs-818836
	I1124 03:39:55.262245  468607 cli_runner.go:164] Run: docker run --rm --name embed-certs-818836-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-818836 --entrypoint /usr/bin/test -v embed-certs-818836:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:39:56.023650  468607 oci.go:107] Successfully prepared a docker volume embed-certs-818836
	I1124 03:39:56.023728  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:39:56.023743  468607 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:39:56.023811  468607 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-818836:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:39:58.487593  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:39:58.487627  465459 cache_images.go:125] Successfully loaded all cached images
	I1124 03:39:58.487632  465459 cache_images.go:94] duration metric: took 15.116520084s to LoadCachedImages
	I1124 03:39:58.487645  465459 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1124 03:39:58.487737  465459 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-262280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-262280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:39:58.487802  465459 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:39:58.517432  465459 cni.go:84] Creating CNI manager for ""
	I1124 03:39:58.517454  465459 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:39:58.517467  465459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:39:58.517491  465459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-262280 NodeName:no-preload-262280 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:39:58.517604  465459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-262280"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:39:58.517675  465459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:39:58.527708  465459 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 03:39:58.527826  465459 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 03:39:58.537240  465459 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1124 03:39:58.537336  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 03:39:58.538133  465459 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1124 03:39:58.538622  465459 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1124 03:39:58.544156  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 03:39:58.544188  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1124 03:39:59.579840  465459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:39:59.602240  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 03:39:59.612666  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 03:39:59.612754  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1124 03:39:59.686847  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 03:39:59.706955  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 03:39:59.707011  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1124 03:40:00.747521  465459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:00.765344  465459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1124 03:40:00.782659  465459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:00.799074  465459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1124 03:40:00.815268  465459 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:00.821044  465459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:00.834962  465459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:00.961773  465459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:00.983622  465459 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280 for IP: 192.168.85.2
	I1124 03:40:00.983698  465459 certs.go:195] generating shared ca certs ...
	I1124 03:40:00.983731  465459 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:00.983948  465459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:40:00.984027  465459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:40:00.984066  465459 certs.go:257] generating profile certs ...
	I1124 03:40:00.984149  465459 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key
	I1124 03:40:00.984190  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt with IP's: []
	I1124 03:40:01.602129  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt ...
	I1124 03:40:01.602164  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: {Name:mk5c809e6dd128dc33970522909ae40ed13851c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:01.602404  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key ...
	I1124 03:40:01.602420  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key: {Name:mk4c99883f96920c3d389a999045dde9f43e74fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:01.602523  465459 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859
	I1124 03:40:01.602540  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:40:02.066816  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 ...
	I1124 03:40:02.066899  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859: {Name:mkd9f7b00f0b8be089cbce37f7826610732080e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.067142  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859 ...
	I1124 03:40:02.067186  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859: {Name:mkaaed6b4175e7a41645d8c3454f2c44a0203858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.067372  465459 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt
	I1124 03:40:02.067467  465459 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key
	I1124 03:40:02.067543  465459 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key
	I1124 03:40:02.067564  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt with IP's: []
	I1124 03:40:02.465004  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt ...
	I1124 03:40:02.465036  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt: {Name:mkf027bf4f367183ad961bb9001139254f6258cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.465206  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key ...
	I1124 03:40:02.465221  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key: {Name:mk8915392d44290b2ab552251edca0730df8ed0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.465611  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:40:02.465663  465459 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:02.465681  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:40:02.465712  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:02.465746  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:02.465775  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:02.465824  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:02.466427  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:02.490422  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:40:02.538618  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:02.580031  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:02.623593  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:40:02.657524  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:40:02.687220  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:02.710371  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:02.732274  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:40:02.755007  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:40:02.777653  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:02.805037  465459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:02.826328  465459 ssh_runner.go:195] Run: openssl version
	I1124 03:40:02.842808  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:40:02.861247  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.869101  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.869168  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.973780  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
	I1124 03:40:02.983869  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:40:03.003344  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.014606  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.014678  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.100872  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:03.119219  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:03.132707  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.143890  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.143956  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.227580  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:03.241329  465459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:03.250558  465459 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:40:03.250662  465459 kubeadm.go:401] StartCluster: {Name:no-preload-262280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-262280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:03.250758  465459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:03.250841  465459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:03.389740  465459 cri.go:89] found id: ""
	I1124 03:40:03.389818  465459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:40:03.413175  465459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:40:03.434949  465459 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:40:03.435019  465459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:40:03.450572  465459 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:40:03.450591  465459 kubeadm.go:158] found existing configuration files:
	
	I1124 03:40:03.450643  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:40:03.481203  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:40:03.481293  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:40:03.505063  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:40:03.526828  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:40:03.526899  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:40:03.542273  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:40:03.554380  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:40:03.554459  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:40:03.565133  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:40:03.583655  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:40:03.583761  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:40:03.600101  465459 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:40:03.695740  465459 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:40:03.695802  465459 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:40:03.729178  465459 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:40:03.729476  465459 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:40:03.729518  465459 kubeadm.go:319] OS: Linux
	I1124 03:40:03.729563  465459 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:40:03.729611  465459 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:40:03.729658  465459 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:40:03.729710  465459 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:40:03.729759  465459 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:40:03.729806  465459 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:40:03.729851  465459 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:40:03.729911  465459 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:40:03.729958  465459 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:40:03.847775  465459 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:40:03.847886  465459 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:40:03.847977  465459 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:40:03.860909  465459 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:40:02.325904  468607 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-818836:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (6.302044362s)
	I1124 03:40:02.325939  468607 kic.go:203] duration metric: took 6.302193098s to extract preloaded images to volume ...
	W1124 03:40:02.326078  468607 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 03:40:02.326190  468607 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:40:02.445610  468607 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-818836 --name embed-certs-818836 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-818836 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-818836 --network embed-certs-818836 --ip 192.168.76.2 --volume embed-certs-818836:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:40:02.830161  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Running}}
	I1124 03:40:02.858743  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:02.883367  468607 cli_runner.go:164] Run: docker exec embed-certs-818836 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:40:02.940884  468607 oci.go:144] the created container "embed-certs-818836" has a running status.
	I1124 03:40:02.940913  468607 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa...
	I1124 03:40:03.398411  468607 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:40:03.429853  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:03.464067  468607 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:40:03.464088  468607 kic_runner.go:114] Args: [docker exec --privileged embed-certs-818836 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:40:03.540196  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:03.576062  468607 machine.go:94] provisionDockerMachine start ...
	I1124 03:40:03.576168  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:03.596498  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.597706  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:03.597742  468607 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:40:03.598783  468607 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 03:40:03.865701  465459 out.go:252]   - Generating certificates and keys ...
	I1124 03:40:03.865794  465459 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:40:03.865861  465459 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:40:04.261018  465459 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:40:04.423750  465459 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:40:04.784877  465459 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:40:05.469508  465459 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:40:05.670184  465459 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:40:05.670529  465459 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-262280] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:40:05.916276  465459 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:40:05.916671  465459 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-262280] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:40:06.295195  465459 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:40:06.703517  465459 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:40:07.221344  465459 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:40:07.221867  465459 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:40:06.756947  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-818836
	
	I1124 03:40:06.757024  468607 ubuntu.go:182] provisioning hostname "embed-certs-818836"
	I1124 03:40:06.757117  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:06.780855  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:06.781159  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:06.781170  468607 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-818836 && echo "embed-certs-818836" | sudo tee /etc/hostname
	I1124 03:40:06.952924  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-818836
	
	I1124 03:40:06.953068  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:06.976988  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:06.977313  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:06.977329  468607 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-818836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-818836/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-818836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:40:07.145464  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:40:07.145556  468607 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-255205/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-255205/.minikube}
	I1124 03:40:07.145614  468607 ubuntu.go:190] setting up certificates
	I1124 03:40:07.145642  468607 provision.go:84] configureAuth start
	I1124 03:40:07.145739  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.169212  468607 provision.go:143] copyHostCerts
	I1124 03:40:07.169290  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem, removing ...
	I1124 03:40:07.169299  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem
	I1124 03:40:07.169376  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem (1078 bytes)
	I1124 03:40:07.169475  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem, removing ...
	I1124 03:40:07.169480  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem
	I1124 03:40:07.169506  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem (1123 bytes)
	I1124 03:40:07.169572  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem, removing ...
	I1124 03:40:07.169578  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem
	I1124 03:40:07.169604  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem (1675 bytes)
	I1124 03:40:07.169661  468607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem org=jenkins.embed-certs-818836 san=[127.0.0.1 192.168.76.2 embed-certs-818836 localhost minikube]
	I1124 03:40:07.418050  468607 provision.go:177] copyRemoteCerts
	I1124 03:40:07.418164  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:40:07.418250  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.436857  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.541668  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:40:07.562105  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 03:40:07.582528  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:40:07.603626  468607 provision.go:87] duration metric: took 457.949417ms to configureAuth
	I1124 03:40:07.603697  468607 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:40:07.603915  468607 config.go:182] Loaded profile config "embed-certs-818836": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:07.603945  468607 machine.go:97] duration metric: took 4.027864554s to provisionDockerMachine
	I1124 03:40:07.603968  468607 client.go:176] duration metric: took 12.503739627s to LocalClient.Create
	I1124 03:40:07.603998  468607 start.go:167] duration metric: took 12.503833413s to libmachine.API.Create "embed-certs-818836"
	I1124 03:40:07.604072  468607 start.go:293] postStartSetup for "embed-certs-818836" (driver="docker")
	I1124 03:40:07.604107  468607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:40:07.604203  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:40:07.604265  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.632600  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.737983  468607 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:40:07.742314  468607 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:40:07.742341  468607 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:40:07.742353  468607 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/addons for local assets ...
	I1124 03:40:07.742407  468607 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/files for local assets ...
	I1124 03:40:07.742485  468607 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem -> 2570692.pem in /etc/ssl/certs
	I1124 03:40:07.742591  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:40:07.751254  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:07.775588  468607 start.go:296] duration metric: took 171.476748ms for postStartSetup
	I1124 03:40:07.776070  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.810247  468607 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json ...
	I1124 03:40:07.810536  468607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:40:07.810584  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.829698  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.934319  468607 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:40:07.940379  468607 start.go:128] duration metric: took 12.845864213s to createHost
	I1124 03:40:07.940407  468607 start.go:83] releasing machines lock for "embed-certs-818836", held for 12.84601335s
	I1124 03:40:07.940518  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.966549  468607 ssh_runner.go:195] Run: cat /version.json
	I1124 03:40:07.966614  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.966858  468607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:40:07.966916  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:08.009694  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:08.010496  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:08.140825  468607 ssh_runner.go:195] Run: systemctl --version
	I1124 03:40:08.236306  468607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:40:08.241952  468607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:40:08.242033  468607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:40:08.275925  468607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 03:40:08.276006  468607 start.go:496] detecting cgroup driver to use...
	I1124 03:40:08.276054  468607 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 03:40:08.276163  468607 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:40:08.293354  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:40:08.309121  468607 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:40:08.309273  468607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:40:08.329161  468607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:40:08.349309  468607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:40:08.512169  468607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:40:08.692876  468607 docker.go:234] disabling docker service ...
	I1124 03:40:08.692943  468607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:40:08.722865  468607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:40:08.738391  468607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:40:08.914395  468607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:40:09.078224  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:40:09.099626  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:40:09.127201  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:40:09.137475  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:40:09.151390  468607 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 03:40:09.151466  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 03:40:09.161530  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:40:09.179218  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:40:09.188732  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:40:09.198154  468607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:40:09.206565  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:40:09.215833  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:40:09.225156  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:40:09.234765  468607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:40:09.243300  468607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:40:09.251671  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:09.434190  468607 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:40:09.629101  468607 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:40:09.629177  468607 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:40:09.633574  468607 start.go:564] Will wait 60s for crictl version
	I1124 03:40:09.633686  468607 ssh_runner.go:195] Run: which crictl
	I1124 03:40:09.637799  468607 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:40:09.680020  468607 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:40:09.680112  468607 ssh_runner.go:195] Run: containerd --version
	I1124 03:40:09.701052  468607 ssh_runner.go:195] Run: containerd --version
	I1124 03:40:09.728551  468607 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 03:40:09.731602  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:40:09.752927  468607 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:40:09.757138  468607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:09.767237  468607 kubeadm.go:884] updating cluster {Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:40:09.767356  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:40:09.767434  468607 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:07.945073  465459 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:40:08.356082  465459 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:40:08.704960  465459 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:40:09.943963  465459 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:40:10.216943  465459 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:40:10.218580  465459 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:40:10.237543  465459 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:40:09.801793  468607 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:40:09.801818  468607 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:40:09.801887  468607 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:09.828434  468607 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:40:09.828460  468607 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:40:09.828491  468607 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 03:40:09.828596  468607 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-818836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:40:09.828666  468607 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:40:09.855719  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:40:09.855746  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:09.855754  468607 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:40:09.855777  468607 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-818836 NodeName:embed-certs-818836 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:40:09.855896  468607 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-818836"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:40:09.855970  468607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:40:09.864082  468607 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:40:09.864155  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:09.871799  468607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 03:40:09.885236  468607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:09.903151  468607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1124 03:40:09.916330  468607 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:09.920755  468607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:09.930245  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:10.095373  468607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:10.120719  468607 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836 for IP: 192.168.76.2
	I1124 03:40:10.120751  468607 certs.go:195] generating shared ca certs ...
	I1124 03:40:10.120775  468607 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.120926  468607 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:40:10.121022  468607 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:40:10.121036  468607 certs.go:257] generating profile certs ...
	I1124 03:40:10.121101  468607 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key
	I1124 03:40:10.121117  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt with IP's: []
	I1124 03:40:10.420574  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt ...
	I1124 03:40:10.420618  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt: {Name:mk242703eac12cbe34e4028bdd5925f7440b86e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.420945  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key ...
	I1124 03:40:10.420962  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key: {Name:mk4f7dbe6cf87f427019f2b9bb878908f82573e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.421164  468607 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253
	I1124 03:40:10.421185  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:40:10.579421  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 ...
	I1124 03:40:10.579459  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253: {Name:mk072dbea8dc92562bf332b98a65b57fa9581398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.579707  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253 ...
	I1124 03:40:10.579733  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253: {Name:mk3986530288979c5c9a2178817e35e45248f3c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.579920  468607 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt
	I1124 03:40:10.580110  468607 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key
	I1124 03:40:10.580235  468607 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key
	I1124 03:40:10.580282  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt with IP's: []
	I1124 03:40:10.650382  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt ...
	I1124 03:40:10.650422  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt: {Name:mk7002a63ade6dd6830536f0b45108488d8d2647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.650709  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key ...
	I1124 03:40:10.650730  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key: {Name:mk9ed88761ece5843396144a4fbfafba4af7e713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.651036  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:40:10.651117  468607 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:10.651134  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:40:10.651185  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:10.651246  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:10.651301  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:10.651375  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:10.652050  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:10.674232  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:40:10.698101  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:10.717381  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:10.737149  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:40:10.761648  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:40:10.786481  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:10.807220  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:10.827613  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:10.849625  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:40:10.870797  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:40:10.892331  468607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:10.908461  468607 ssh_runner.go:195] Run: openssl version
	I1124 03:40:10.916101  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:40:10.926608  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.931358  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.931455  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.976219  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:10.986375  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:10.996391  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.017389  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.017511  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.093548  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:11.109631  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:40:11.122383  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.127328  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.127425  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.171896  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
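	The three openssl/ln pairs above make the copied PEMs visible to TLS clients by symlinking each one under /etc/ssl/certs using its OpenSSL subject hash (b5213941.0, 3ec20f2e.0 and 51391683.0 in this run). A hedged sketch of that step for a single cert; the PEM path comes from the log, the variable names are illustrative:
	    PEM=/usr/share/ca-certificates/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$PEM")   # prints the subject hash, e.g. b5213941
	    sudo ln -fs "$PEM" "/etc/ssl/certs/${HASH}.0"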
	I1124 03:40:11.181990  468607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:11.186817  468607 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:40:11.186902  468607 kubeadm.go:401] StartCluster: {Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:11.187015  468607 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:11.187107  468607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:11.229657  468607 cri.go:89] found id: ""
	I1124 03:40:11.229767  468607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:40:11.239862  468607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:40:11.249588  468607 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:40:11.249708  468607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:40:11.261397  468607 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:40:11.261464  468607 kubeadm.go:158] found existing configuration files:
	
	I1124 03:40:11.261537  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:40:11.271489  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:40:11.271603  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:40:11.282245  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:40:11.295430  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:40:11.295544  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:40:11.303936  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:40:11.314965  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:40:11.315086  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:40:11.322532  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:40:11.331297  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:40:11.331410  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:40:11.339587  468607 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:40:11.388094  468607 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:40:11.388694  468607 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:40:11.418975  468607 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:40:11.419097  468607 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:40:11.419162  468607 kubeadm.go:319] OS: Linux
	I1124 03:40:11.419229  468607 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:40:11.419310  468607 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:40:11.419397  468607 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:40:11.419482  468607 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:40:11.419545  468607 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:40:11.419609  468607 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:40:11.419672  468607 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:40:11.419733  468607 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:40:11.419793  468607 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:40:11.498745  468607 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:40:11.498892  468607 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:40:11.499019  468607 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:40:11.505807  468607 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:40:10.241345  465459 out.go:252]   - Booting up control plane ...
	I1124 03:40:10.241455  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:40:10.245314  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:40:10.248607  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:40:10.281242  465459 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:40:10.281374  465459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:40:10.290260  465459 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:40:10.290359  465459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:40:10.290400  465459 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:40:10.449824  465459 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:40:10.450005  465459 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:40:11.952880  465459 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500833117s
	I1124 03:40:11.954116  465459 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:40:11.954483  465459 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:40:11.954823  465459 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:40:11.955791  465459 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:40:11.512278  468607 out.go:252]   - Generating certificates and keys ...
	I1124 03:40:11.512384  468607 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:40:11.512475  468607 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:40:12.156551  468607 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:40:12.440381  468607 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:40:13.054828  468607 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:40:14.412107  468607 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:40:17.439040  465459 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.482829056s
	I1124 03:40:14.824196  468607 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:40:14.824831  468607 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-818836 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:40:15.040863  468607 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:40:15.040998  468607 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-818836 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:40:15.376085  468607 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:40:15.719552  468607 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:40:16.788559  468607 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:40:16.789083  468607 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:40:17.179360  468607 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:40:17.589911  468607 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:40:18.716938  468607 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:40:19.434256  468607 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:40:19.598171  468607 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:40:19.599352  468607 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:40:19.612523  468607 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:40:19.615809  468607 out.go:252]   - Booting up control plane ...
	I1124 03:40:19.615923  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:40:19.616002  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:40:19.616070  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:40:19.643244  468607 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:40:19.643372  468607 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:40:19.651919  468607 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:40:19.660667  468607 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:40:19.661493  468607 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:40:20.959069  465459 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.003836426s
	I1124 03:40:22.125067  465459 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.16861254s
	I1124 03:40:22.188271  465459 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:40:22.216515  465459 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:40:22.258578  465459 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:40:22.259036  465459 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-262280 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:40:22.271087  465459 kubeadm.go:319] [bootstrap-token] Using token: 2yptao.r7yd6l7ev1yowcqn
	I1124 03:40:22.274016  465459 out.go:252]   - Configuring RBAC rules ...
	I1124 03:40:22.274139  465459 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:40:22.285868  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:40:22.302245  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:40:22.309475  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:40:22.314669  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:40:22.324840  465459 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:40:22.533610  465459 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:40:22.993832  465459 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:40:23.539106  465459 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:40:23.540728  465459 kubeadm.go:319] 
	I1124 03:40:23.540809  465459 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:40:23.540814  465459 kubeadm.go:319] 
	I1124 03:40:23.540891  465459 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:40:23.540895  465459 kubeadm.go:319] 
	I1124 03:40:23.540920  465459 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:40:23.541365  465459 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:40:23.541428  465459 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:40:23.541434  465459 kubeadm.go:319] 
	I1124 03:40:23.541487  465459 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:40:23.541491  465459 kubeadm.go:319] 
	I1124 03:40:23.541539  465459 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:40:23.541542  465459 kubeadm.go:319] 
	I1124 03:40:23.541594  465459 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:40:23.541669  465459 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:40:23.541737  465459 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:40:23.541741  465459 kubeadm.go:319] 
	I1124 03:40:23.542069  465459 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:40:23.542155  465459 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:40:23.542159  465459 kubeadm.go:319] 
	I1124 03:40:23.542500  465459 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2yptao.r7yd6l7ev1yowcqn \
	I1124 03:40:23.542614  465459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 \
	I1124 03:40:23.542853  465459 kubeadm.go:319] 	--control-plane 
	I1124 03:40:23.542871  465459 kubeadm.go:319] 
	I1124 03:40:23.543221  465459 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:40:23.543231  465459 kubeadm.go:319] 
	I1124 03:40:23.547828  465459 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2yptao.r7yd6l7ev1yowcqn \
	I1124 03:40:23.550982  465459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 
	I1124 03:40:23.555511  465459 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:40:23.555736  465459 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:40:23.555841  465459 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:40:23.555857  465459 cni.go:84] Creating CNI manager for ""
	I1124 03:40:23.555865  465459 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:23.559067  465459 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:40:19.836180  468607 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:40:19.836307  468607 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:40:20.837911  468607 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001791556s
	I1124 03:40:20.841824  468607 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:40:20.841924  468607 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 03:40:20.842025  468607 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:40:20.842109  468607 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:40:23.561962  465459 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:40:23.570649  465459 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:40:23.570666  465459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:40:23.611043  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:40:24.448553  465459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:40:24.448680  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:24.448750  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-262280 minikube.k8s.io/updated_at=2025_11_24T03_40_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-262280 minikube.k8s.io/primary=true
	I1124 03:40:25.025787  465459 ops.go:34] apiserver oom_adj: -16
	I1124 03:40:25.025937  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:25.526394  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:26.025997  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:26.526754  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:27.026641  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:27.253055  465459 kubeadm.go:1114] duration metric: took 2.804418537s to wait for elevateKubeSystemPrivileges
	I1124 03:40:27.253082  465459 kubeadm.go:403] duration metric: took 24.002425527s to StartCluster
	I1124 03:40:27.253101  465459 settings.go:142] acquiring lock: {Name:mk06b563e5bc383cd64ed92ea3d8ac6aac195923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:27.253165  465459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:40:27.253834  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/kubeconfig: {Name:mk59b88a9b5c6c93f7412b3f64976d4efe64bdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:27.254034  465459 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:40:27.254180  465459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:40:27.254424  465459 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:40:27.254486  465459 addons.go:70] Setting storage-provisioner=true in profile "no-preload-262280"
	I1124 03:40:27.254500  465459 addons.go:239] Setting addon storage-provisioner=true in "no-preload-262280"
	I1124 03:40:27.254522  465459 host.go:66] Checking if "no-preload-262280" exists ...
	I1124 03:40:27.255029  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.255348  465459 config.go:182] Loaded profile config "no-preload-262280": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:27.255425  465459 addons.go:70] Setting default-storageclass=true in profile "no-preload-262280"
	I1124 03:40:27.255459  465459 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-262280"
	I1124 03:40:27.255742  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.258534  465459 out.go:179] * Verifying Kubernetes components...
	I1124 03:40:27.264721  465459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:27.290687  465459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:40:27.293638  465459 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:27.293665  465459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:40:27.293734  465459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-262280
	I1124 03:40:27.295179  465459 addons.go:239] Setting addon default-storageclass=true in "no-preload-262280"
	I1124 03:40:27.295223  465459 host.go:66] Checking if "no-preload-262280" exists ...
	I1124 03:40:27.295646  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.333873  465459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/no-preload-262280/id_rsa Username:docker}
	I1124 03:40:27.342194  465459 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:27.342217  465459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:40:27.342282  465459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-262280
	I1124 03:40:27.369752  465459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/no-preload-262280/id_rsa Username:docker}
	I1124 03:40:28.289510  468607 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.446711872s
	I1124 03:40:28.718064  468607 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.876138727s
	I1124 03:40:28.086729  465459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:28.166898  465459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:40:28.167031  465459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:28.202605  465459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:29.603255  465459 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.436193485s)
	I1124 03:40:29.604024  465459 node_ready.go:35] waiting up to 6m0s for node "no-preload-262280" to be "Ready" ...
	I1124 03:40:29.604243  465459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.437316052s)
	I1124 03:40:29.604267  465459 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:40:30.149139  465459 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-262280" context rescaled to 1 replicas
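	The sed pipeline completed above edits the coredns ConfigMap in place; after the replace, the Corefile should carry a hosts block like the following (reconstructed from the sed expression in the log, shown here only as the expected result, not captured output):
	    hosts {
	       192.168.85.1 host.minikube.internal
	       fallthrough
	    }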
	I1124 03:40:30.266899  465459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.064217856s)
	I1124 03:40:30.272444  465459 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 03:40:30.843974  468607 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002059314s
	I1124 03:40:30.870609  468607 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:40:30.901638  468607 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:40:30.924179  468607 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:40:30.924719  468607 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-818836 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:40:30.940184  468607 kubeadm.go:319] [bootstrap-token] Using token: 0bimeo.bzidkyv9i8e7nkw3
	I1124 03:40:30.943266  468607 out.go:252]   - Configuring RBAC rules ...
	I1124 03:40:30.943387  468607 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:40:30.951610  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:40:30.963677  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:40:30.971959  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:40:30.977923  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:40:30.986249  468607 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:40:31.251471  468607 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:40:31.778202  468607 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:40:32.251684  468607 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:40:32.253477  468607 kubeadm.go:319] 
	I1124 03:40:32.253550  468607 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:40:32.253555  468607 kubeadm.go:319] 
	I1124 03:40:32.253632  468607 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:40:32.253637  468607 kubeadm.go:319] 
	I1124 03:40:32.253662  468607 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:40:32.254164  468607 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:40:32.254227  468607 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:40:32.254231  468607 kubeadm.go:319] 
	I1124 03:40:32.254285  468607 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:40:32.254288  468607 kubeadm.go:319] 
	I1124 03:40:32.254336  468607 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:40:32.254339  468607 kubeadm.go:319] 
	I1124 03:40:32.254391  468607 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:40:32.254466  468607 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:40:32.254534  468607 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:40:32.254538  468607 kubeadm.go:319] 
	I1124 03:40:32.254839  468607 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:40:32.254921  468607 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:40:32.254928  468607 kubeadm.go:319] 
	I1124 03:40:32.255259  468607 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0bimeo.bzidkyv9i8e7nkw3 \
	I1124 03:40:32.255368  468607 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 \
	I1124 03:40:32.255600  468607 kubeadm.go:319] 	--control-plane 
	I1124 03:40:32.255610  468607 kubeadm.go:319] 
	I1124 03:40:32.255896  468607 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:40:32.255905  468607 kubeadm.go:319] 
	I1124 03:40:32.256198  468607 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0bimeo.bzidkyv9i8e7nkw3 \
	I1124 03:40:32.256558  468607 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 
	I1124 03:40:32.262002  468607 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:40:32.262227  468607 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:40:32.262331  468607 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:40:32.262347  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:40:32.262355  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:32.265575  468607 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:40:30.275374  465459 addons.go:530] duration metric: took 3.020937085s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1124 03:40:31.607716  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:32.268802  468607 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:40:32.276058  468607 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:40:32.276076  468607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:40:32.304040  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:40:32.950060  468607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:40:32.950194  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:32.950260  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-818836 minikube.k8s.io/updated_at=2025_11_24T03_40_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-818836 minikube.k8s.io/primary=true
	I1124 03:40:33.247296  468607 ops.go:34] apiserver oom_adj: -16
	I1124 03:40:33.247413  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:33.747810  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:34.247563  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:34.747727  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:35.248529  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:35.747874  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:36.248065  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:36.747517  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:37.248357  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:37.375914  468607 kubeadm.go:1114] duration metric: took 4.425764478s to wait for elevateKubeSystemPrivileges
	I1124 03:40:37.375948  468607 kubeadm.go:403] duration metric: took 26.189049705s to StartCluster
	I1124 03:40:37.375965  468607 settings.go:142] acquiring lock: {Name:mk06b563e5bc383cd64ed92ea3d8ac6aac195923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:37.376029  468607 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:40:37.377428  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/kubeconfig: {Name:mk59b88a9b5c6c93f7412b3f64976d4efe64bdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:37.377669  468607 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:40:37.377785  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:40:37.378042  468607 config.go:182] Loaded profile config "embed-certs-818836": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:37.378089  468607 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:40:37.378159  468607 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-818836"
	I1124 03:40:37.378172  468607 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-818836"
	I1124 03:40:37.378198  468607 host.go:66] Checking if "embed-certs-818836" exists ...
	I1124 03:40:37.378697  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.378976  468607 addons.go:70] Setting default-storageclass=true in profile "embed-certs-818836"
	I1124 03:40:37.379003  468607 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-818836"
	I1124 03:40:37.379254  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.381419  468607 out.go:179] * Verifying Kubernetes components...
	I1124 03:40:37.384428  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:37.421715  468607 addons.go:239] Setting addon default-storageclass=true in "embed-certs-818836"
	I1124 03:40:37.421763  468607 host.go:66] Checking if "embed-certs-818836" exists ...
	I1124 03:40:37.422190  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.443094  468607 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:40:34.107205  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	W1124 03:40:36.107495  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:37.445972  468607 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:37.445995  468607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:40:37.446062  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:37.468083  468607 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:37.468107  468607 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:40:37.468173  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:37.505843  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:37.512810  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:37.807453  468607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:37.824901  468607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:37.825083  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:40:37.844459  468607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:38.592240  468607 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 03:40:38.594605  468607 node_ready.go:35] waiting up to 6m0s for node "embed-certs-818836" to be "Ready" ...
	I1124 03:40:38.651892  468607 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:40:38.655002  468607 addons.go:530] duration metric: took 1.276905995s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:40:39.096916  468607 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-818836" context rescaled to 1 replicas
	W1124 03:40:38.606995  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	W1124 03:40:40.607344  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:42.608225  465459 node_ready.go:49] node "no-preload-262280" is "Ready"
	I1124 03:40:42.608272  465459 node_ready.go:38] duration metric: took 13.004210314s for node "no-preload-262280" to be "Ready" ...
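	The node-Ready polling summarized above (about 13s in this run) can be approximated by hand with kubectl's built-in wait; a hedged equivalent using the same kubectl binary and kubeconfig paths the runner logs, run on the node:
	    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	      wait --for=condition=Ready node/no-preload-262280 --timeout=6m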
	I1124 03:40:42.608287  465459 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:40:42.608350  465459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:42.623406  465459 api_server.go:72] duration metric: took 15.369343221s to wait for apiserver process to appear ...
	I1124 03:40:42.623436  465459 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:40:42.623469  465459 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:40:42.633313  465459 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:40:42.634411  465459 api_server.go:141] control plane version: v1.34.1
	I1124 03:40:42.634433  465459 api_server.go:131] duration metric: took 10.990663ms to wait for apiserver health ...
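	The healthz probe recorded above can be reproduced from inside the node; a hedged one-liner against the same endpoint, assuming the default RBAC bindings that allow anonymous access to /healthz and the CA path copied earlier in this log:
	    curl --cacert /var/lib/minikube/certs/ca.crt https://192.168.85.2:8443/healthz   # expect: ok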
	I1124 03:40:42.634442  465459 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:40:42.638347  465459 system_pods.go:59] 8 kube-system pods found
	I1124 03:40:42.638381  465459 system_pods.go:61] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.638387  465459 system_pods.go:61] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.638392  465459 system_pods.go:61] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.638396  465459 system_pods.go:61] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.638401  465459 system_pods.go:61] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.638404  465459 system_pods.go:61] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.638407  465459 system_pods.go:61] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.638413  465459 system_pods.go:61] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:42.638420  465459 system_pods.go:74] duration metric: took 3.972643ms to wait for pod list to return data ...
	I1124 03:40:42.638431  465459 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:40:42.641761  465459 default_sa.go:45] found service account: "default"
	I1124 03:40:42.641824  465459 default_sa.go:55] duration metric: took 3.386704ms for default service account to be created ...
	I1124 03:40:42.641868  465459 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:40:42.645101  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:42.645134  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.645141  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.645147  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.645155  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.645160  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.645164  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.645168  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.645173  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:42.645193  465459 retry.go:31] will retry after 242.077653ms: missing components: kube-dns
	I1124 03:40:42.893628  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:42.893678  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.893684  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.893699  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.893704  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.893709  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.893713  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.893716  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.893720  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:42.893822  465459 retry.go:31] will retry after 373.532935ms: missing components: kube-dns
	W1124 03:40:40.597355  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:42.597817  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:44.598213  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	I1124 03:40:43.271122  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:43.271161  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:43.271172  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:43.271178  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:43.271182  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:43.271187  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:43.271191  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:43.271195  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:43.271206  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:43.271221  465459 retry.go:31] will retry after 322.6325ms: missing components: kube-dns
	I1124 03:40:43.599918  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:43.600007  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:43.600023  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:43.600030  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:43.600035  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:43.600040  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:43.600044  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:43.600048  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:43.600051  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:43.600066  465459 retry.go:31] will retry after 394.949668ms: missing components: kube-dns
	I1124 03:40:44.001892  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:44.001938  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Running
	I1124 03:40:44.001946  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:44.001952  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:44.001960  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:44.001965  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:44.001968  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:44.001972  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:44.001976  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:44.001989  465459 system_pods.go:126] duration metric: took 1.36009666s to wait for k8s-apps to be running ...
	I1124 03:40:44.001998  465459 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:40:44.002065  465459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:40:44.023562  465459 system_svc.go:56] duration metric: took 21.553336ms WaitForService to wait for kubelet
	I1124 03:40:44.023598  465459 kubeadm.go:587] duration metric: took 16.769539879s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:40:44.023618  465459 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:40:44.027009  465459 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:40:44.027046  465459 node_conditions.go:123] node cpu capacity is 2
	I1124 03:40:44.027060  465459 node_conditions.go:105] duration metric: took 3.437042ms to run NodePressure ...
	I1124 03:40:44.027074  465459 start.go:242] waiting for startup goroutines ...
	I1124 03:40:44.027110  465459 start.go:247] waiting for cluster config update ...
	I1124 03:40:44.027129  465459 start.go:256] writing updated cluster config ...
	I1124 03:40:44.027439  465459 ssh_runner.go:195] Run: rm -f paused
	I1124 03:40:44.032809  465459 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:44.036889  465459 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mj9gd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.042142  465459 pod_ready.go:94] pod "coredns-66bc5c9577-mj9gd" is "Ready"
	I1124 03:40:44.042172  465459 pod_ready.go:86] duration metric: took 5.207096ms for pod "coredns-66bc5c9577-mj9gd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.044894  465459 pod_ready.go:83] waiting for pod "etcd-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.050138  465459 pod_ready.go:94] pod "etcd-no-preload-262280" is "Ready"
	I1124 03:40:44.050222  465459 pod_ready.go:86] duration metric: took 5.300135ms for pod "etcd-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.052994  465459 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.057831  465459 pod_ready.go:94] pod "kube-apiserver-no-preload-262280" is "Ready"
	I1124 03:40:44.057868  465459 pod_ready.go:86] duration metric: took 4.8387ms for pod "kube-apiserver-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.060783  465459 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.437093  465459 pod_ready.go:94] pod "kube-controller-manager-no-preload-262280" is "Ready"
	I1124 03:40:44.437124  465459 pod_ready.go:86] duration metric: took 376.313274ms for pod "kube-controller-manager-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.637747  465459 pod_ready.go:83] waiting for pod "kube-proxy-xg8w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.042982  465459 pod_ready.go:94] pod "kube-proxy-xg8w4" is "Ready"
	I1124 03:40:45.043021  465459 pod_ready.go:86] duration metric: took 405.246191ms for pod "kube-proxy-xg8w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.238605  465459 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.636771  465459 pod_ready.go:94] pod "kube-scheduler-no-preload-262280" is "Ready"
	I1124 03:40:45.636842  465459 pod_ready.go:86] duration metric: took 398.208005ms for pod "kube-scheduler-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.636877  465459 pod_ready.go:40] duration metric: took 1.604024878s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:45.700045  465459 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:40:45.703311  465459 out.go:179] * Done! kubectl is now configured to use "no-preload-262280" cluster and "default" namespace by default
	W1124 03:40:47.097978  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:49.098467  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	I1124 03:40:49.600289  468607 node_ready.go:49] node "embed-certs-818836" is "Ready"
	I1124 03:40:49.600325  468607 node_ready.go:38] duration metric: took 11.005685237s for node "embed-certs-818836" to be "Ready" ...
	I1124 03:40:49.600342  468607 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:40:49.600401  468607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:49.616102  468607 api_server.go:72] duration metric: took 12.238396901s to wait for apiserver process to appear ...
	I1124 03:40:49.616131  468607 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:40:49.616151  468607 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:40:49.625663  468607 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 03:40:49.628248  468607 api_server.go:141] control plane version: v1.34.1
	I1124 03:40:49.628298  468607 api_server.go:131] duration metric: took 12.158646ms to wait for apiserver health ...
	I1124 03:40:49.628308  468607 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:40:49.635456  468607 system_pods.go:59] 8 kube-system pods found
	I1124 03:40:49.635501  468607 system_pods.go:61] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.635509  468607 system_pods.go:61] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.635527  468607 system_pods.go:61] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.635531  468607 system_pods.go:61] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.635536  468607 system_pods.go:61] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.635542  468607 system_pods.go:61] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.635546  468607 system_pods.go:61] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.635559  468607 system_pods.go:61] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.635566  468607 system_pods.go:74] duration metric: took 7.25158ms to wait for pod list to return data ...
	I1124 03:40:49.635579  468607 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:40:49.639861  468607 default_sa.go:45] found service account: "default"
	I1124 03:40:49.639903  468607 default_sa.go:55] duration metric: took 4.317754ms for default service account to be created ...
	I1124 03:40:49.639914  468607 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:40:49.642908  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:49.642943  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.642950  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.642956  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.642961  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.642975  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.642979  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.642984  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.642992  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.643018  468607 retry.go:31] will retry after 271.674831ms: missing components: kube-dns
	I1124 03:40:49.919376  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:49.919415  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.919423  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.919429  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.919435  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.919440  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.919444  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.919448  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.919455  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.919474  468607 retry.go:31] will retry after 335.268613ms: missing components: kube-dns
	I1124 03:40:50.262160  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:50.262218  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:50.262226  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:50.262264  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:50.262281  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:50.262290  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:50.262298  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:50.262302  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:50.262312  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:50.262349  468607 retry.go:31] will retry after 385.617551ms: missing components: kube-dns
	I1124 03:40:50.651970  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:50.652010  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:50.652018  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:50.652025  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:50.652030  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:50.652034  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:50.652038  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:50.652041  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:50.652047  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:50.652064  468607 retry.go:31] will retry after 470.580451ms: missing components: kube-dns
	I1124 03:40:51.133462  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:51.133497  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Running
	I1124 03:40:51.133504  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:51.133509  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:51.133514  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:51.133518  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:51.133528  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:51.133533  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:51.133538  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Running
	I1124 03:40:51.133558  468607 system_pods.go:126] duration metric: took 1.493636996s to wait for k8s-apps to be running ...
	I1124 03:40:51.133566  468607 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:40:51.133625  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:40:51.151193  468607 system_svc.go:56] duration metric: took 17.617707ms WaitForService to wait for kubelet
	I1124 03:40:51.151222  468607 kubeadm.go:587] duration metric: took 13.773521156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:40:51.151242  468607 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:40:51.158998  468607 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:40:51.159035  468607 node_conditions.go:123] node cpu capacity is 2
	I1124 03:40:51.159163  468607 node_conditions.go:105] duration metric: took 7.914387ms to run NodePressure ...
	I1124 03:40:51.159180  468607 start.go:242] waiting for startup goroutines ...
	I1124 03:40:51.159201  468607 start.go:247] waiting for cluster config update ...
	I1124 03:40:51.159225  468607 start.go:256] writing updated cluster config ...
	I1124 03:40:51.159566  468607 ssh_runner.go:195] Run: rm -f paused
	I1124 03:40:51.163938  468607 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:51.233364  468607 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dgvvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.238633  468607 pod_ready.go:94] pod "coredns-66bc5c9577-dgvvg" is "Ready"
	I1124 03:40:51.238668  468607 pod_ready.go:86] duration metric: took 5.226756ms for pod "coredns-66bc5c9577-dgvvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.242048  468607 pod_ready.go:83] waiting for pod "etcd-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.247506  468607 pod_ready.go:94] pod "etcd-embed-certs-818836" is "Ready"
	I1124 03:40:51.247534  468607 pod_ready.go:86] duration metric: took 5.457921ms for pod "etcd-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.250505  468607 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.256168  468607 pod_ready.go:94] pod "kube-apiserver-embed-certs-818836" is "Ready"
	I1124 03:40:51.256200  468607 pod_ready.go:86] duration metric: took 5.665265ms for pod "kube-apiserver-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.258827  468607 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.568969  468607 pod_ready.go:94] pod "kube-controller-manager-embed-certs-818836" is "Ready"
	I1124 03:40:51.568996  468607 pod_ready.go:86] duration metric: took 310.144443ms for pod "kube-controller-manager-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.768346  468607 pod_ready.go:83] waiting for pod "kube-proxy-kqtwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.168601  468607 pod_ready.go:94] pod "kube-proxy-kqtwg" is "Ready"
	I1124 03:40:52.168630  468607 pod_ready.go:86] duration metric: took 400.250484ms for pod "kube-proxy-kqtwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.369520  468607 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.768587  468607 pod_ready.go:94] pod "kube-scheduler-embed-certs-818836" is "Ready"
	I1124 03:40:52.768616  468607 pod_ready.go:86] duration metric: took 399.065879ms for pod "kube-scheduler-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.768629  468607 pod_ready.go:40] duration metric: took 1.604655617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:52.832190  468607 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:40:52.835417  468607 out.go:179] * Done! kubectl is now configured to use "embed-certs-818836" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a58a4728ac10f       1611cd07b61d5       8 seconds ago       Running             busybox                   0                   0308b01a7a26f       busybox                                      default
	6260374b03f86       138784d87c9c5       14 seconds ago      Running             coredns                   0                   2458228456a3b       coredns-66bc5c9577-dgvvg                     kube-system
	bdaea43dac204       ba04bb24b9575       14 seconds ago      Running             storage-provisioner       0                   8152ad9444328       storage-provisioner                          kube-system
	466fe30e398c2       b1a8c6f707935       25 seconds ago      Running             kindnet-cni               0                   54530eb20f030       kindnet-fxtfb                                kube-system
	4d1ac5a789d22       05baa95f5142d       26 seconds ago      Running             kube-proxy                0                   2c4ac076c25c1       kube-proxy-kqtwg                             kube-system
	a59e80e4497b4       b5f57ec6b9867       42 seconds ago      Running             kube-scheduler            0                   b5e98495343e1       kube-scheduler-embed-certs-818836            kube-system
	06008282a01c0       43911e833d64d       42 seconds ago      Running             kube-apiserver            0                   74f1dfbc093ce       kube-apiserver-embed-certs-818836            kube-system
	6b9c388047cfa       7eb2c6ff0c5a7       42 seconds ago      Running             kube-controller-manager   0                   42ade8eb3674e       kube-controller-manager-embed-certs-818836   kube-system
	ac1d217ae9676       a1894772a478e       43 seconds ago      Running             etcd                      0                   db52836f67dc1       etcd-embed-certs-818836                      kube-system
	
	
	==> containerd <==
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.057907964Z" level=info msg="connecting to shim bdaea43dac204948bdf28895d9cb5bdf2db2c74e81ace882300ed5718f87add6" address="unix:///run/containerd/s/57f2a81bbcac182bad45dbeab33a1327cfff18ad488d5ce902af0bfdf0e7bc5e" protocol=ttrpc version=3
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.100785614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dgvvg,Uid:0ef9d488-59a5-4f43-9832-c97f1c895bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2458228456a3be9fe22927f01f937d8dff13347bb763bfbef568481f4f4b7b5c\""
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.109653385Z" level=info msg="CreateContainer within sandbox \"2458228456a3be9fe22927f01f937d8dff13347bb763bfbef568481f4f4b7b5c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.122564109Z" level=info msg="Container 6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.135428844Z" level=info msg="CreateContainer within sandbox \"2458228456a3be9fe22927f01f937d8dff13347bb763bfbef568481f4f4b7b5c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7\""
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.136416140Z" level=info msg="StartContainer for \"6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7\""
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.137607113Z" level=info msg="connecting to shim 6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7" address="unix:///run/containerd/s/a4e37a1f6b438936ba3f660d2ba31a71943470dca57caaebd80ed68edb829e37" protocol=ttrpc version=3
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.198426517Z" level=info msg="StartContainer for \"bdaea43dac204948bdf28895d9cb5bdf2db2c74e81ace882300ed5718f87add6\" returns successfully"
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.230648924Z" level=info msg="StartContainer for \"6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7\" returns successfully"
	Nov 24 03:40:53 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:53.383842754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:558523a2-89e3-43af-9d9f-326d9e1d9629,Namespace:default,Attempt:0,}"
	Nov 24 03:40:53 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:53.447848870Z" level=info msg="connecting to shim 0308b01a7a26fea59abf7edb5f2a7031f830ee4e945d7900726ad1d0604c1492" address="unix:///run/containerd/s/23d48a1f08b22aec323cafe57c4c4fb059dde661aeeebebe704b7840c9169c9c" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:40:53 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:53.499028110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:558523a2-89e3-43af-9d9f-326d9e1d9629,Namespace:default,Attempt:0,} returns sandbox id \"0308b01a7a26fea59abf7edb5f2a7031f830ee4e945d7900726ad1d0604c1492\""
	Nov 24 03:40:53 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:53.504031586Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.531248955Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.535067333Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.537704928Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.541782319Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.543332597Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.039101863s"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.543388548Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.559734707Z" level=info msg="CreateContainer within sandbox \"0308b01a7a26fea59abf7edb5f2a7031f830ee4e945d7900726ad1d0604c1492\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.573406183Z" level=info msg="Container a58a4728ac10f75bdaebf884535abef5a37cfccdf62f946c782091daf81530e9: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.585552965Z" level=info msg="CreateContainer within sandbox \"0308b01a7a26fea59abf7edb5f2a7031f830ee4e945d7900726ad1d0604c1492\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"a58a4728ac10f75bdaebf884535abef5a37cfccdf62f946c782091daf81530e9\""
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.586502788Z" level=info msg="StartContainer for \"a58a4728ac10f75bdaebf884535abef5a37cfccdf62f946c782091daf81530e9\""
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.588716955Z" level=info msg="connecting to shim a58a4728ac10f75bdaebf884535abef5a37cfccdf62f946c782091daf81530e9" address="unix:///run/containerd/s/23d48a1f08b22aec323cafe57c4c4fb059dde661aeeebebe704b7840c9169c9c" protocol=ttrpc version=3
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.676448344Z" level=info msg="StartContainer for \"a58a4728ac10f75bdaebf884535abef5a37cfccdf62f946c782091daf81530e9\" returns successfully"
	
	
	==> coredns [6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43956 - 37230 "HINFO IN 739609537041384603.8632231235251508514. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.079132064s
	
	
	==> describe nodes <==
	Name:               embed-certs-818836
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-818836
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=embed-certs-818836
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_40_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:40:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-818836
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:41:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:41:02 +0000   Mon, 24 Nov 2025 03:40:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:41:02 +0000   Mon, 24 Nov 2025 03:40:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:41:02 +0000   Mon, 24 Nov 2025 03:40:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:41:02 +0000   Mon, 24 Nov 2025 03:40:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-818836
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                1beb3fc5-b491-4e20-a9b9-ad38a1b35e92
	  Boot ID:                    63a8a852-1462-44b1-9d6f-f77d26e8568f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-dgvvg                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-embed-certs-818836                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         32s
	  kube-system                 kindnet-fxtfb                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-embed-certs-818836             250m (12%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-818836    200m (10%)    0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-kqtwg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-embed-certs-818836             100m (5%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   NodeHasSufficientMemory  44s (x8 over 44s)  kubelet          Node embed-certs-818836 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    44s (x8 over 44s)  kubelet          Node embed-certs-818836 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     44s (x7 over 44s)  kubelet          Node embed-certs-818836 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  44s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 33s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 33s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  32s                kubelet          Node embed-certs-818836 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    32s                kubelet          Node embed-certs-818836 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     32s                kubelet          Node embed-certs-818836 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           28s                node-controller  Node embed-certs-818836 event: Registered Node embed-certs-818836 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-818836 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:27] overlayfs: idmapped layers are currently not supported
	[Nov24 02:28] overlayfs: idmapped layers are currently not supported
	[Nov24 02:30] overlayfs: idmapped layers are currently not supported
	[  +9.824160] overlayfs: idmapped layers are currently not supported
	[Nov24 02:31] overlayfs: idmapped layers are currently not supported
	[Nov24 02:32] overlayfs: idmapped layers are currently not supported
	[ +27.981383] overlayfs: idmapped layers are currently not supported
	[Nov24 02:33] overlayfs: idmapped layers are currently not supported
	[Nov24 02:34] overlayfs: idmapped layers are currently not supported
	[Nov24 02:35] overlayfs: idmapped layers are currently not supported
	[Nov24 02:36] overlayfs: idmapped layers are currently not supported
	[Nov24 02:37] overlayfs: idmapped layers are currently not supported
	[Nov24 02:38] overlayfs: idmapped layers are currently not supported
	[Nov24 02:39] overlayfs: idmapped layers are currently not supported
	[ +24.837346] overlayfs: idmapped layers are currently not supported
	[Nov24 02:40] overlayfs: idmapped layers are currently not supported
	[ +40.823948] overlayfs: idmapped layers are currently not supported
	[  +1.705989] overlayfs: idmapped layers are currently not supported
	[Nov24 02:42] overlayfs: idmapped layers are currently not supported
	[ +21.661904] overlayfs: idmapped layers are currently not supported
	[Nov24 02:44] overlayfs: idmapped layers are currently not supported
	[  +1.074777] overlayfs: idmapped layers are currently not supported
	[Nov24 02:46] overlayfs: idmapped layers are currently not supported
	[ +19.120392] overlayfs: idmapped layers are currently not supported
	[Nov24 02:48] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [ac1d217ae967618cbe817fd20ce47ce5cb82bbe446e86ca4529a98da239abdf7] <==
	{"level":"warn","ts":"2025-11-24T03:40:25.873553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.919807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.921154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.941933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.961019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.976792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.994961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.021776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.041879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.070442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.080946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.118876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.146734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.186759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.224841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.271473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.298803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.388399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.423077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.452751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.474020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.497747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.524876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.580699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.733470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49294","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:41:04 up  2:23,  0 user,  load average: 4.84, 3.81, 3.05
	Linux embed-certs-818836 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [466fe30e398c25b51e46fe99b224055705f1cf68fe2bb27f8a8daa065373d23d] <==
	I1124 03:40:39.230199       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:40:39.230426       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 03:40:39.230553       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:40:39.230564       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:40:39.230578       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:40:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:40:39.432550       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:40:39.432731       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:40:39.432778       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:40:39.434006       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:40:39.733771       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:40:39.733800       1 metrics.go:72] Registering metrics
	I1124 03:40:39.734036       1 controller.go:711] "Syncing nftables rules"
	I1124 03:40:49.436805       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:40:49.436882       1 main.go:301] handling current node
	I1124 03:40:59.432570       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:40:59.432618       1 main.go:301] handling current node
	
	
	==> kube-apiserver [06008282a01c0b88fab50226602b3a4cc42c51fa3b8c8cee4a7d3d29f430950a] <==
	I1124 03:40:28.614790       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 03:40:28.618426       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:40:28.683666       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:28.689620       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:40:28.724556       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:40:28.771948       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:40:28.788976       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:28.789247       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:40:29.313030       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:40:29.345951       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:40:29.346136       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:40:30.676322       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:40:30.745285       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:40:30.826682       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:40:30.836003       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 03:40:30.837380       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:40:30.852808       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:40:31.449556       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:40:31.721331       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:40:31.776793       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:40:31.801120       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:40:36.754712       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:40:37.425269       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:40:37.577618       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:37.588850       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6b9c388047cfaf63599101a93c74576aad5ddfbe42d36bc9d2587f8610c0b185] <==
	I1124 03:40:36.493298       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:40:36.494421       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 03:40:36.494437       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 03:40:36.494800       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 03:40:36.495009       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-818836"
	I1124 03:40:36.495160       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 03:40:36.495260       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:40:36.495342       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:40:36.495632       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:40:36.495662       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 03:40:36.496094       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:40:36.496408       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:40:36.496595       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:40:36.496719       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:40:36.497479       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:40:36.498367       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 03:40:36.500910       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:40:36.501127       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:40:36.504598       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:40:36.510959       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:40:36.512065       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:40:36.528541       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:40:36.528569       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:40:36.528578       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:40:51.497321       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4d1ac5a789d22eca3c9aec74f820ba93b3ff927e4ac76703af976882df0f285e] <==
	I1124 03:40:38.210327       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:40:38.309570       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:40:38.409726       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:40:38.409774       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 03:40:38.409907       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:40:38.454951       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:40:38.455010       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:40:38.459511       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:40:38.459862       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:40:38.459876       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:40:38.461544       1 config.go:200] "Starting service config controller"
	I1124 03:40:38.461555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:40:38.461572       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:40:38.461577       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:40:38.461589       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:40:38.461593       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:40:38.466326       1 config.go:309] "Starting node config controller"
	I1124 03:40:38.466511       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:40:38.466609       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:40:38.563656       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:40:38.564600       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:40:38.564630       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a59e80e4497b4929b98a64a841dc410b4ba2a701446d53829e16139bc9d77a8b] <==
	E1124 03:40:28.733045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:40:28.733299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:40:28.733554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:40:28.733988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:40:28.734185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:40:28.734353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:40:28.734648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:40:29.558668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 03:40:29.593259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:40:29.678555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:40:29.678978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:40:29.696638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:40:29.712980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:40:29.950771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:40:29.975983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:40:29.980747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:40:30.007688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:40:30.139616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:40:30.139697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:40:30.139759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:40:30.143540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:40:30.143618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:40:30.166069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:40:30.187032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1124 03:40:32.276815       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: I1124 03:40:36.832544    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f021efe-9818-47f9-9567-504428fa8b11-lib-modules\") pod \"kindnet-fxtfb\" (UID: \"5f021efe-9818-47f9-9567-504428fa8b11\") " pod="kube-system/kindnet-fxtfb"
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: I1124 03:40:36.932983    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a89f17a9-6fd2-47fd-b106-b177e8575a6a-kube-proxy\") pod \"kube-proxy-kqtwg\" (UID: \"a89f17a9-6fd2-47fd-b106-b177e8575a6a\") " pod="kube-system/kube-proxy-kqtwg"
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: I1124 03:40:36.933046    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a89f17a9-6fd2-47fd-b106-b177e8575a6a-xtables-lock\") pod \"kube-proxy-kqtwg\" (UID: \"a89f17a9-6fd2-47fd-b106-b177e8575a6a\") " pod="kube-system/kube-proxy-kqtwg"
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: I1124 03:40:36.933097    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a89f17a9-6fd2-47fd-b106-b177e8575a6a-lib-modules\") pod \"kube-proxy-kqtwg\" (UID: \"a89f17a9-6fd2-47fd-b106-b177e8575a6a\") " pod="kube-system/kube-proxy-kqtwg"
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: I1124 03:40:36.933138    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8sh2\" (UniqueName: \"kubernetes.io/projected/a89f17a9-6fd2-47fd-b106-b177e8575a6a-kube-api-access-m8sh2\") pod \"kube-proxy-kqtwg\" (UID: \"a89f17a9-6fd2-47fd-b106-b177e8575a6a\") " pod="kube-system/kube-proxy-kqtwg"
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: E1124 03:40:36.942493    1470 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: E1124 03:40:36.942541    1470 projected.go:196] Error preparing data for projected volume kube-api-access-xm5rz for pod kube-system/kindnet-fxtfb: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: E1124 03:40:36.943761    1470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f021efe-9818-47f9-9567-504428fa8b11-kube-api-access-xm5rz podName:5f021efe-9818-47f9-9567-504428fa8b11 nodeName:}" failed. No retries permitted until 2025-11-24 03:40:37.443702896 +0000 UTC m=+5.803124863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xm5rz" (UniqueName: "kubernetes.io/projected/5f021efe-9818-47f9-9567-504428fa8b11-kube-api-access-xm5rz") pod "kindnet-fxtfb" (UID: "5f021efe-9818-47f9-9567-504428fa8b11") : configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.044030    1470 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.044068    1470 projected.go:196] Error preparing data for projected volume kube-api-access-m8sh2 for pod kube-system/kube-proxy-kqtwg: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.044178    1470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a89f17a9-6fd2-47fd-b106-b177e8575a6a-kube-api-access-m8sh2 podName:a89f17a9-6fd2-47fd-b106-b177e8575a6a nodeName:}" failed. No retries permitted until 2025-11-24 03:40:37.544155017 +0000 UTC m=+5.903577009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m8sh2" (UniqueName: "kubernetes.io/projected/a89f17a9-6fd2-47fd-b106-b177e8575a6a-kube-api-access-m8sh2") pod "kube-proxy-kqtwg" (UID: "a89f17a9-6fd2-47fd-b106-b177e8575a6a") : configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.538454    1470 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.538485    1470 projected.go:196] Error preparing data for projected volume kube-api-access-xm5rz for pod kube-system/kindnet-fxtfb: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.538570    1470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f021efe-9818-47f9-9567-504428fa8b11-kube-api-access-xm5rz podName:5f021efe-9818-47f9-9567-504428fa8b11 nodeName:}" failed. No retries permitted until 2025-11-24 03:40:38.538550273 +0000 UTC m=+6.897972248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xm5rz" (UniqueName: "kubernetes.io/projected/5f021efe-9818-47f9-9567-504428fa8b11-kube-api-access-xm5rz") pod "kindnet-fxtfb" (UID: "5f021efe-9818-47f9-9567-504428fa8b11") : configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: I1124 03:40:37.643727    1470 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 03:40:39 embed-certs-818836 kubelet[1470]: I1124 03:40:39.013629    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kqtwg" podStartSLOduration=3.013602252 podStartE2EDuration="3.013602252s" podCreationTimestamp="2025-11-24 03:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:39.013368814 +0000 UTC m=+7.372790789" watchObservedRunningTime="2025-11-24 03:40:39.013602252 +0000 UTC m=+7.373024218"
	Nov 24 03:40:40 embed-certs-818836 kubelet[1470]: I1124 03:40:40.064696    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fxtfb" podStartSLOduration=4.06467528 podStartE2EDuration="4.06467528s" podCreationTimestamp="2025-11-24 03:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:40.061533017 +0000 UTC m=+8.420954992" watchObservedRunningTime="2025-11-24 03:40:40.06467528 +0000 UTC m=+8.424097255"
	Nov 24 03:40:49 embed-certs-818836 kubelet[1470]: I1124 03:40:49.526399    1470 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:40:49 embed-certs-818836 kubelet[1470]: I1124 03:40:49.648175    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgp8j\" (UniqueName: \"kubernetes.io/projected/0ef9d488-59a5-4f43-9832-c97f1c895bdd-kube-api-access-cgp8j\") pod \"coredns-66bc5c9577-dgvvg\" (UID: \"0ef9d488-59a5-4f43-9832-c97f1c895bdd\") " pod="kube-system/coredns-66bc5c9577-dgvvg"
	Nov 24 03:40:49 embed-certs-818836 kubelet[1470]: I1124 03:40:49.648240    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b0205ba1-f93d-444f-88a8-2d4eec603213-tmp\") pod \"storage-provisioner\" (UID: \"b0205ba1-f93d-444f-88a8-2d4eec603213\") " pod="kube-system/storage-provisioner"
	Nov 24 03:40:49 embed-certs-818836 kubelet[1470]: I1124 03:40:49.648264    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv5dr\" (UniqueName: \"kubernetes.io/projected/b0205ba1-f93d-444f-88a8-2d4eec603213-kube-api-access-zv5dr\") pod \"storage-provisioner\" (UID: \"b0205ba1-f93d-444f-88a8-2d4eec603213\") " pod="kube-system/storage-provisioner"
	Nov 24 03:40:49 embed-certs-818836 kubelet[1470]: I1124 03:40:49.648286    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ef9d488-59a5-4f43-9832-c97f1c895bdd-config-volume\") pod \"coredns-66bc5c9577-dgvvg\" (UID: \"0ef9d488-59a5-4f43-9832-c97f1c895bdd\") " pod="kube-system/coredns-66bc5c9577-dgvvg"
	Nov 24 03:40:51 embed-certs-818836 kubelet[1470]: I1124 03:40:51.107730    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.107708724 podStartE2EDuration="13.107708724s" podCreationTimestamp="2025-11-24 03:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:51.088433727 +0000 UTC m=+19.447855694" watchObservedRunningTime="2025-11-24 03:40:51.107708724 +0000 UTC m=+19.467130691"
	Nov 24 03:40:51 embed-certs-818836 kubelet[1470]: I1124 03:40:51.107857    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dgvvg" podStartSLOduration=14.107850403 podStartE2EDuration="14.107850403s" podCreationTimestamp="2025-11-24 03:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:51.106664074 +0000 UTC m=+19.466086049" watchObservedRunningTime="2025-11-24 03:40:51.107850403 +0000 UTC m=+19.467272378"
	Nov 24 03:40:53 embed-certs-818836 kubelet[1470]: I1124 03:40:53.170547    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnt4g\" (UniqueName: \"kubernetes.io/projected/558523a2-89e3-43af-9d9f-326d9e1d9629-kube-api-access-cnt4g\") pod \"busybox\" (UID: \"558523a2-89e3-43af-9d9f-326d9e1d9629\") " pod="default/busybox"
	
	
	==> storage-provisioner [bdaea43dac204948bdf28895d9cb5bdf2db2c74e81ace882300ed5718f87add6] <==
	I1124 03:40:50.198000       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:40:50.266662       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:40:50.266726       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:40:50.283251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:50.293273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:40:50.293652       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:40:50.294024       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-818836_9f050f1e-62b2-4d60-af55-6500e2d54406!
	I1124 03:40:50.295422       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d88f2cb-63eb-466e-8cde-49b8ebb184fc", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-818836_9f050f1e-62b2-4d60-af55-6500e2d54406 became leader
	W1124 03:40:50.296129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:50.307875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:40:50.394893       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-818836_9f050f1e-62b2-4d60-af55-6500e2d54406!
	W1124 03:40:52.311440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:52.318451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:54.322101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:54.327443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:56.331080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:56.340182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:58.343055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:58.349135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:00.355624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:00.365545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:02.368661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:02.374294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:04.377180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:04.382410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-818836 -n embed-certs-818836
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-818836 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-818836
helpers_test.go:243: (dbg) docker inspect embed-certs-818836:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4",
	        "Created": "2025-11-24T03:40:02.463990203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 469176,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:40:02.542904474Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4/hostname",
	        "HostsPath": "/var/lib/docker/containers/18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4/hosts",
	        "LogPath": "/var/lib/docker/containers/18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4/18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4-json.log",
	        "Name": "/embed-certs-818836",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-818836:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-818836",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "18d18a9ae732bc879a3ffbbfec593a2ee20bc57bf9848c7a9878a7d4ad9fb9b4",
	                "LowerDir": "/var/lib/docker/overlay2/2c7aa8849c9ad820565f9f23d196e9e185f2fc05ac0615325ea27f4da72c1af3-init/diff:/var/lib/docker/overlay2/11b197f530f0d571f61892814d8d4c774f7d3e5a97abdd8c5aa182cc99b2d856/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2c7aa8849c9ad820565f9f23d196e9e185f2fc05ac0615325ea27f4da72c1af3/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2c7aa8849c9ad820565f9f23d196e9e185f2fc05ac0615325ea27f4da72c1af3/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2c7aa8849c9ad820565f9f23d196e9e185f2fc05ac0615325ea27f4da72c1af3/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-818836",
	                "Source": "/var/lib/docker/volumes/embed-certs-818836/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-818836",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-818836",
	                "name.minikube.sigs.k8s.io": "embed-certs-818836",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a91155f5a0322a5aab9ebc09616599e4bfe72bb49407d94c7deb2716f8c094d3",
	            "SandboxKey": "/var/run/docker/netns/a91155f5a032",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-818836": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:3d:a2:14:d3:23",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "91615606eb797a1b1696bed9db8d1fe7d1d91433226c147019609786a547b7b9",
	                    "EndpointID": "e42f96d9c325bd1298eadd29f90d35abc1ace7d658114974a4a778c02f3e5bb3",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-818836",
	                        "18d18a9ae732"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-818836 -n embed-certs-818836
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-818836 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-818836 logs -n 25: (1.197000015s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ start   │ -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:35 UTC │
	│ delete  │ -p kubernetes-upgrade-850960                                                                                                                                                                                                                        │ kubernetes-upgrade-850960 │ jenkins │ v1.37.0 │ 24 Nov 25 03:35 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-expiration-846384 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ force-systemd-env-574539 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p force-systemd-env-574539                                                                                                                                                                                                                         │ force-systemd-env-574539  │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p cert-options-216763 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ cert-options-216763 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ ssh     │ -p cert-options-216763 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ delete  │ -p cert-options-216763                                                                                                                                                                                                                              │ cert-options-216763       │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:36 UTC │
	│ start   │ -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:36 UTC │ 24 Nov 25 03:38 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-098965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ stop    │ -p old-k8s-version-098965 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-098965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:38 UTC │
	│ start   │ -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:38 UTC │ 24 Nov 25 03:39 UTC │
	│ image   │ old-k8s-version-098965 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ pause   │ -p old-k8s-version-098965 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ unpause │ -p old-k8s-version-098965 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p old-k8s-version-098965                                                                                                                                                                                                                           │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p old-k8s-version-098965                                                                                                                                                                                                                           │ old-k8s-version-098965    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p no-preload-262280 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-262280         │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ start   │ -p cert-expiration-846384 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ delete  │ -p cert-expiration-846384                                                                                                                                                                                                                           │ cert-expiration-846384    │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:39 UTC │
	│ start   │ -p embed-certs-818836 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-818836        │ jenkins │ v1.37.0 │ 24 Nov 25 03:39 UTC │ 24 Nov 25 03:40 UTC │
	│ addons  │ enable metrics-server -p no-preload-262280 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-262280         │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │ 24 Nov 25 03:40 UTC │
	│ stop    │ -p no-preload-262280 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-262280         │ jenkins │ v1.37.0 │ 24 Nov 25 03:40 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:39:54
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:39:54.770134  468607 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:39:54.770765  468607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:54.770803  468607 out.go:374] Setting ErrFile to fd 2...
	I1124 03:39:54.770823  468607 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:39:54.771173  468607 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:39:54.771694  468607 out.go:368] Setting JSON to false
	I1124 03:39:54.772710  468607 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8523,"bootTime":1763947072,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:39:54.772814  468607 start.go:143] virtualization:  
	I1124 03:39:54.776844  468607 out.go:179] * [embed-certs-818836] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:39:54.781644  468607 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:39:54.781732  468607 notify.go:221] Checking for updates...
	I1124 03:39:54.787053  468607 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:39:54.790493  468607 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:39:54.793844  468607 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:39:54.797082  468607 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:39:54.800233  468607 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:39:54.803908  468607 config.go:182] Loaded profile config "no-preload-262280": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:39:54.804064  468607 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:39:54.846350  468607 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:39:54.846478  468607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:39:54.943233  468607 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-24 03:39:54.932926558 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:39:54.943335  468607 docker.go:319] overlay module found
	I1124 03:39:54.946509  468607 out.go:179] * Using the docker driver based on user configuration
	I1124 03:39:54.950114  468607 start.go:309] selected driver: docker
	I1124 03:39:54.950133  468607 start.go:927] validating driver "docker" against <nil>
	I1124 03:39:54.950147  468607 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:39:54.950879  468607 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:39:55.051907  468607 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-24 03:39:55.038363177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:39:55.052067  468607 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:39:55.052307  468607 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:39:55.055713  468607 out.go:179] * Using Docker driver with root privileges
	I1124 03:39:55.058665  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:39:55.058771  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:39:55.058786  468607 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:39:55.058875  468607 start.go:353] cluster config:
	{Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:39:55.062215  468607 out.go:179] * Starting "embed-certs-818836" primary control-plane node in "embed-certs-818836" cluster
	I1124 03:39:55.065106  468607 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:39:55.068109  468607 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:39:55.071078  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:39:55.071139  468607 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 03:39:55.071152  468607 cache.go:65] Caching tarball of preloaded images
	I1124 03:39:55.071260  468607 preload.go:238] Found /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 03:39:55.071275  468607 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 03:39:55.071398  468607 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json ...
	I1124 03:39:55.071424  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json: {Name:mk937c632daa818953aa058a3473ebcd37b1b74f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:39:55.071593  468607 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:39:55.094186  468607 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:39:55.094210  468607 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:39:55.094227  468607 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:39:55.094258  468607 start.go:360] acquireMachinesLock for embed-certs-818836: {Name:mk5ce88de168b198a494858bb8201276136df5bf Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:39:55.094377  468607 start.go:364] duration metric: took 97.543µs to acquireMachinesLock for "embed-certs-818836"
	I1124 03:39:55.094417  468607 start.go:93] Provisioning new machine with config: &{Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:39:55.094497  468607 start.go:125] createHost starting for "" (driver="docker")
	I1124 03:39:53.821541  465459 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.603191329s)
	I1124 03:39:53.821565  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 03:39:53.821584  465459 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:53.821636  465459 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1124 03:39:57.814796  465459 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (3.993137445s)
	I1124 03:39:57.814820  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 03:39:57.814838  465459 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:39:57.814894  465459 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1124 03:39:55.099888  468607 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:39:55.100165  468607 start.go:159] libmachine.API.Create for "embed-certs-818836" (driver="docker")
	I1124 03:39:55.100219  468607 client.go:173] LocalClient.Create starting
	I1124 03:39:55.100327  468607 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem
	I1124 03:39:55.100376  468607 main.go:143] libmachine: Decoding PEM data...
	I1124 03:39:55.100396  468607 main.go:143] libmachine: Parsing certificate...
	I1124 03:39:55.100448  468607 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem
	I1124 03:39:55.100500  468607 main.go:143] libmachine: Decoding PEM data...
	I1124 03:39:55.100517  468607 main.go:143] libmachine: Parsing certificate...
	I1124 03:39:55.100910  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:39:55.125795  468607 cli_runner.go:211] docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:39:55.125884  468607 network_create.go:284] running [docker network inspect embed-certs-818836] to gather additional debugging logs...
	I1124 03:39:55.125914  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836
	W1124 03:39:55.143227  468607 cli_runner.go:211] docker network inspect embed-certs-818836 returned with exit code 1
	I1124 03:39:55.143261  468607 network_create.go:287] error running [docker network inspect embed-certs-818836]: docker network inspect embed-certs-818836: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-818836 not found
	I1124 03:39:55.143275  468607 network_create.go:289] output of [docker network inspect embed-certs-818836]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-818836 not found
	
	** /stderr **
	I1124 03:39:55.143372  468607 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:39:55.161548  468607 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-752aaa40bb3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:00:20:e4:71:15} reservation:<nil>}
	I1124 03:39:55.161924  468607 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbb0dee281db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:ff:07:3e:91:0f} reservation:<nil>}
	I1124 03:39:55.162178  468607 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d95ffec60547 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:b5:f2:ed:07:1e} reservation:<nil>}
	I1124 03:39:55.162624  468607 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a2c210}
	I1124 03:39:55.162647  468607 network_create.go:124] attempt to create docker network embed-certs-818836 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 03:39:55.162703  468607 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-818836 embed-certs-818836
	I1124 03:39:55.225512  468607 network_create.go:108] docker network embed-certs-818836 192.168.76.0/24 created
	I1124 03:39:55.225548  468607 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-818836" container
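The subnet-selection lines above show minikube skipping 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 because existing bridge interfaces already own them, then settling on 192.168.76.0/24 for the new network. A minimal Go sketch of that kind of scan, assuming the taken subnets are already known; the candidate list and the helper name are illustrative, not minikube's actual code:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate CIDR that does not overlap any
// CIDR in taken; both lists are assumed to hold valid /24 blocks.
func firstFreeSubnet(candidates, taken []string) (string, error) {
	for _, c := range candidates {
		_, cNet, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		free := true
		for _, t := range taken {
			_, tNet, err := net.ParseCIDR(t)
			if err != nil {
				return "", err
			}
			// Two /24s overlap exactly when either contains the other's base address.
			if cNet.Contains(tNet.IP) || tNet.Contains(cNet.IP) {
				free = false
				break
			}
		}
		if free {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %d candidates", len(candidates))
}

func main() {
	// Subnets reported as "taken" in the log above.
	taken := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}
	// Candidate pool stepping as the 49 -> 58 -> 67 -> 76 progression suggests.
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24", "192.168.76.0/24"}
	free, err := firstFreeSubnet(candidates, taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", free) // 192.168.76.0/24
}

The chosen block's gateway (.1) and first client address (.2) then become the network gateway and the container's static IP, matching the "calculated static IP 192.168.76.2" line.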
	I1124 03:39:55.225630  468607 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:39:55.242034  468607 cli_runner.go:164] Run: docker volume create embed-certs-818836 --label name.minikube.sigs.k8s.io=embed-certs-818836 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:39:55.262160  468607 oci.go:103] Successfully created a docker volume embed-certs-818836
	I1124 03:39:55.262245  468607 cli_runner.go:164] Run: docker run --rm --name embed-certs-818836-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-818836 --entrypoint /usr/bin/test -v embed-certs-818836:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:39:56.023650  468607 oci.go:107] Successfully prepared a docker volume embed-certs-818836
	I1124 03:39:56.023728  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:39:56.023743  468607 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:39:56.023811  468607 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-818836:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 03:39:58.487593  465459 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21975-255205/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 03:39:58.487627  465459 cache_images.go:125] Successfully loaded all cached images
	I1124 03:39:58.487632  465459 cache_images.go:94] duration metric: took 15.116520084s to LoadCachedImages
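The "Loading image" / "Transferred and loaded" pairs above amount to importing each cached image tarball into containerd's k8s.io namespace with "ctr -n=k8s.io images import". A rough Go sketch of that loop run directly on the node; the tarball paths are copied from the log, but the loop itself is only illustrative (minikube actually issues these commands over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Cached image tarballs named in the log above.
	tarballs := []string{
		"/var/lib/minikube/images/kube-proxy_v1.34.1",
		"/var/lib/minikube/images/etcd_3.6.4-0",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	for _, t := range tarballs {
		// Import the tarball into containerd's k8s.io namespace, as the log's
		// "sudo ctr -n=k8s.io images import <path>" commands do.
		cmd := exec.Command("sudo", "ctr", "-n=k8s.io", "images", "import", t)
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("import %s failed: %v\n%s", t, err, out)
			continue
		}
		fmt.Println("imported", t)
	}
}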
	I1124 03:39:58.487645  465459 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1124 03:39:58.487737  465459 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-262280 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-262280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:39:58.487802  465459 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:39:58.517432  465459 cni.go:84] Creating CNI manager for ""
	I1124 03:39:58.517454  465459 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:39:58.517467  465459 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:39:58.517491  465459 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-262280 NodeName:no-preload-262280 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:39:58.517604  465459 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-262280"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:39:58.517675  465459 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:39:58.527708  465459 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 03:39:58.527826  465459 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 03:39:58.537240  465459 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1124 03:39:58.537336  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 03:39:58.538133  465459 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1124 03:39:58.538622  465459 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1124 03:39:58.544156  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 03:39:58.544188  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1124 03:39:59.579840  465459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:39:59.602240  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 03:39:59.612666  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 03:39:59.612754  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1124 03:39:59.686847  465459 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 03:39:59.706955  465459 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 03:39:59.707011  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
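The "checksum=file:https://.../kubectl.sha256" URLs above indicate that each binary is fetched together with its published SHA-256 digest and verified before being copied onto the node. A minimal sketch of that pattern, assuming the ".sha256" file carries the hex digest as its first field; the function names and the /tmp destination are illustrative, not minikube's download code:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into memory, failing on any non-200 response.
func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// downloadVerified fetches url, checks it against the digest published at
// url+".sha256", and writes it to dest only if the digests match.
func downloadVerified(url, dest string) error {
	body, err := fetch(url)
	if err != nil {
		return err
	}
	sumFile, err := fetch(url + ".sha256")
	if err != nil {
		return err
	}
	fields := strings.Fields(string(sumFile))
	if len(fields) == 0 {
		return fmt.Errorf("empty checksum file for %s", url)
	}
	want := fields[0]
	sum := sha256.Sum256(body)
	if got := hex.EncodeToString(sum[:]); got != want {
		return fmt.Errorf("checksum mismatch for %s: got %s, want %s", url, got, want)
	}
	return os.WriteFile(dest, body, 0o755)
}

func main() {
	url := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl"
	if err := downloadVerified(url, "/tmp/kubectl"); err != nil {
		fmt.Fprintln(os.Stderr, "download failed:", err)
		os.Exit(1)
	}
	fmt.Println("kubectl downloaded and verified")
}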
	I1124 03:40:00.747521  465459 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:00.765344  465459 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1124 03:40:00.782659  465459 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:00.799074  465459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1124 03:40:00.815268  465459 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:00.821044  465459 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:00.834962  465459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:00.961773  465459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:00.983622  465459 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280 for IP: 192.168.85.2
	I1124 03:40:00.983698  465459 certs.go:195] generating shared ca certs ...
	I1124 03:40:00.983731  465459 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:00.983948  465459 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:40:00.984027  465459 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:40:00.984066  465459 certs.go:257] generating profile certs ...
	I1124 03:40:00.984149  465459 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key
	I1124 03:40:00.984190  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt with IP's: []
	I1124 03:40:01.602129  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt ...
	I1124 03:40:01.602164  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: {Name:mk5c809e6dd128dc33970522909ae40ed13851c5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:01.602404  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key ...
	I1124 03:40:01.602420  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.key: {Name:mk4c99883f96920c3d389a999045dde9f43e74fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:01.602523  465459 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859
	I1124 03:40:01.602540  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 03:40:02.066816  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 ...
	I1124 03:40:02.066899  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859: {Name:mkd9f7b00f0b8be089cbce37f7826610732080e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.067142  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859 ...
	I1124 03:40:02.067186  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859: {Name:mkaaed6b4175e7a41645d8c3454f2c44a0203858 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.067372  465459 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt.4a433859 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt
	I1124 03:40:02.067467  465459 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key.4a433859 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key
	I1124 03:40:02.067543  465459 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key
	I1124 03:40:02.067564  465459 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt with IP's: []
	I1124 03:40:02.465004  465459 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt ...
	I1124 03:40:02.465036  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt: {Name:mkf027bf4f367183ad961bb9001139254f6258cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.465206  465459 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key ...
	I1124 03:40:02.465221  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key: {Name:mk8915392d44290b2ab552251edca0730df8ed0f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:02.465611  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:40:02.465663  465459 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:02.465681  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:40:02.465712  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:02.465746  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:02.465775  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:02.465824  465459 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:02.466427  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:02.490422  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:40:02.538618  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:02.580031  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:02.623593  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 03:40:02.657524  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:40:02.687220  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:02.710371  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:02.732274  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:40:02.755007  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:40:02.777653  465459 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:02.805037  465459 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:02.826328  465459 ssh_runner.go:195] Run: openssl version
	I1124 03:40:02.842808  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:40:02.861247  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.869101  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.869168  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:40:02.973780  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
	I1124 03:40:02.983869  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:40:03.003344  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.014606  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.014678  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:40:03.100872  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:03.119219  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:03.132707  465459 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.143890  465459 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.143956  465459 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:03.227580  465459 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
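The three-step pattern above (openssl x509 -hash -noout, then test -L <hash>.0 || ln -fs) installs each CA under the OpenSSL hashed-directory convention, where TLS clients look up trust anchors by the symlink /etc/ssl/certs/<subject-hash>.0. A small Go sketch of the idempotent link step, taking the hash that openssl printed as an input rather than recomputing it; the values are copied from the log lines above and the helper name is illustrative:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// ensureHashLink creates certsDir/<hash>.0 -> target unless a symlink with
// that name already exists, mirroring the "test -L ... || ln -fs ..." one-liner.
func ensureHashLink(certsDir, hash, target string) error {
	link := filepath.Join(certsDir, hash+".0")
	if fi, err := os.Lstat(link); err == nil && fi.Mode()&os.ModeSymlink != 0 {
		return nil // already linked, nothing to do
	}
	_ = os.Remove(link) // replace a stale non-symlink entry, if any
	return os.Symlink(target, link)
}

func main() {
	// Per the log, subject hash b5213941 corresponds to minikubeCA.pem.
	if err := ensureHashLink("/etc/ssl/certs", "b5213941", "/etc/ssl/certs/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}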
	I1124 03:40:03.241329  465459 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:03.250558  465459 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:40:03.250662  465459 kubeadm.go:401] StartCluster: {Name:no-preload-262280 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-262280 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:03.250758  465459 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:03.250841  465459 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:03.389740  465459 cri.go:89] found id: ""
	I1124 03:40:03.389818  465459 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:40:03.413175  465459 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:40:03.434949  465459 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:40:03.435019  465459 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:40:03.450572  465459 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:40:03.450591  465459 kubeadm.go:158] found existing configuration files:
	
	I1124 03:40:03.450643  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:40:03.481203  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:40:03.481293  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:40:03.505063  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:40:03.526828  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:40:03.526899  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:40:03.542273  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:40:03.554380  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:40:03.554459  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:40:03.565133  465459 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:40:03.583655  465459 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:40:03.583761  465459 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:40:03.600101  465459 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:40:03.695740  465459 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:40:03.695802  465459 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:40:03.729178  465459 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:40:03.729476  465459 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:40:03.729518  465459 kubeadm.go:319] OS: Linux
	I1124 03:40:03.729563  465459 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:40:03.729611  465459 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:40:03.729658  465459 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:40:03.729710  465459 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:40:03.729759  465459 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:40:03.729806  465459 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:40:03.729851  465459 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:40:03.729911  465459 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:40:03.729958  465459 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:40:03.847775  465459 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:40:03.847886  465459 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:40:03.847977  465459 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 03:40:03.860909  465459 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:40:02.325904  468607 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-818836:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (6.302044362s)
	I1124 03:40:02.325939  468607 kic.go:203] duration metric: took 6.302193098s to extract preloaded images to volume ...
	W1124 03:40:02.326078  468607 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 03:40:02.326190  468607 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:40:02.445610  468607 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-818836 --name embed-certs-818836 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-818836 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-818836 --network embed-certs-818836 --ip 192.168.76.2 --volume embed-certs-818836:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:40:02.830161  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Running}}
	I1124 03:40:02.858743  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:02.883367  468607 cli_runner.go:164] Run: docker exec embed-certs-818836 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:40:02.940884  468607 oci.go:144] the created container "embed-certs-818836" has a running status.
	I1124 03:40:02.940913  468607 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa...
	I1124 03:40:03.398411  468607 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:40:03.429853  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:03.464067  468607 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:40:03.464088  468607 kic_runner.go:114] Args: [docker exec --privileged embed-certs-818836 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:40:03.540196  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:03.576062  468607 machine.go:94] provisionDockerMachine start ...
	I1124 03:40:03.576168  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:03.596498  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:03.597706  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:03.597742  468607 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:40:03.598783  468607 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1124 03:40:03.865701  465459 out.go:252]   - Generating certificates and keys ...
	I1124 03:40:03.865794  465459 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:40:03.865861  465459 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:40:04.261018  465459 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:40:04.423750  465459 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:40:04.784877  465459 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:40:05.469508  465459 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:40:05.670184  465459 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:40:05.670529  465459 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-262280] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:40:05.916276  465459 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:40:05.916671  465459 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-262280] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 03:40:06.295195  465459 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:40:06.703517  465459 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:40:07.221344  465459 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:40:07.221867  465459 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:40:06.756947  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-818836
	
	I1124 03:40:06.757024  468607 ubuntu.go:182] provisioning hostname "embed-certs-818836"
	I1124 03:40:06.757117  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:06.780855  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:06.781159  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:06.781170  468607 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-818836 && echo "embed-certs-818836" | sudo tee /etc/hostname
	I1124 03:40:06.952924  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-818836
	
	I1124 03:40:06.953068  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:06.976988  468607 main.go:143] libmachine: Using SSH client type: native
	I1124 03:40:06.977313  468607 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 03:40:06.977329  468607 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-818836' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-818836/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-818836' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:40:07.145464  468607 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:40:07.145556  468607 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-255205/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-255205/.minikube}
	I1124 03:40:07.145614  468607 ubuntu.go:190] setting up certificates
	I1124 03:40:07.145642  468607 provision.go:84] configureAuth start
	I1124 03:40:07.145739  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.169212  468607 provision.go:143] copyHostCerts
	I1124 03:40:07.169290  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem, removing ...
	I1124 03:40:07.169299  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem
	I1124 03:40:07.169376  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem (1078 bytes)
	I1124 03:40:07.169475  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem, removing ...
	I1124 03:40:07.169480  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem
	I1124 03:40:07.169506  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem (1123 bytes)
	I1124 03:40:07.169572  468607 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem, removing ...
	I1124 03:40:07.169578  468607 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem
	I1124 03:40:07.169604  468607 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem (1675 bytes)
	I1124 03:40:07.169661  468607 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem org=jenkins.embed-certs-818836 san=[127.0.0.1 192.168.76.2 embed-certs-818836 localhost minikube]
	I1124 03:40:07.418050  468607 provision.go:177] copyRemoteCerts
	I1124 03:40:07.418164  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:40:07.418250  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.436857  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.541668  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:40:07.562105  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 03:40:07.582528  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 03:40:07.603626  468607 provision.go:87] duration metric: took 457.949417ms to configureAuth
	I1124 03:40:07.603697  468607 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:40:07.603915  468607 config.go:182] Loaded profile config "embed-certs-818836": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:07.603945  468607 machine.go:97] duration metric: took 4.027864554s to provisionDockerMachine
	I1124 03:40:07.603968  468607 client.go:176] duration metric: took 12.503739627s to LocalClient.Create
	I1124 03:40:07.603998  468607 start.go:167] duration metric: took 12.503833413s to libmachine.API.Create "embed-certs-818836"
	I1124 03:40:07.604072  468607 start.go:293] postStartSetup for "embed-certs-818836" (driver="docker")
	I1124 03:40:07.604107  468607 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:40:07.604203  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:40:07.604265  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.632600  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.737983  468607 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:40:07.742314  468607 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:40:07.742341  468607 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:40:07.742353  468607 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/addons for local assets ...
	I1124 03:40:07.742407  468607 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/files for local assets ...
	I1124 03:40:07.742485  468607 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem -> 2570692.pem in /etc/ssl/certs
	I1124 03:40:07.742591  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:40:07.751254  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:07.775588  468607 start.go:296] duration metric: took 171.476748ms for postStartSetup
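postStartSetup above synced the local asset 2570692.pem into /etc/ssl/certs on the node. A quick, illustrative check that the file landed, using the same profile and path shown in the log:

    minikube -p embed-certs-818836 ssh -- ls -l /etc/ssl/certs/2570692.pem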
	I1124 03:40:07.776070  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.810247  468607 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/config.json ...
	I1124 03:40:07.810536  468607 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:40:07.810584  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.829698  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:07.934319  468607 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:40:07.940379  468607 start.go:128] duration metric: took 12.845864213s to createHost
	I1124 03:40:07.940407  468607 start.go:83] releasing machines lock for "embed-certs-818836", held for 12.84601335s
	I1124 03:40:07.940518  468607 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-818836
	I1124 03:40:07.966549  468607 ssh_runner.go:195] Run: cat /version.json
	I1124 03:40:07.966614  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:07.966858  468607 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:40:07.966916  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:08.009694  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:08.010496  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:08.140825  468607 ssh_runner.go:195] Run: systemctl --version
	I1124 03:40:08.236306  468607 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:40:08.241952  468607 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:40:08.242033  468607 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:40:08.275925  468607 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 03:40:08.276006  468607 start.go:496] detecting cgroup driver to use...
	I1124 03:40:08.276054  468607 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 03:40:08.276163  468607 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:40:08.293354  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:40:08.309121  468607 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:40:08.309273  468607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:40:08.329161  468607 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:40:08.349309  468607 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:40:08.512169  468607 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:40:08.692876  468607 docker.go:234] disabling docker service ...
	I1124 03:40:08.692943  468607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:40:08.722865  468607 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:40:08.738391  468607 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:40:08.914395  468607 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:40:09.078224  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:40:09.099626  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:40:09.127201  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:40:09.137475  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:40:09.151390  468607 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 03:40:09.151466  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 03:40:09.161530  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:40:09.179218  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:40:09.188732  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:40:09.198154  468607 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:40:09.206565  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:40:09.215833  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:40:09.225156  468607 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:40:09.234765  468607 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:40:09.243300  468607 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:40:09.251671  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:09.434190  468607 ssh_runner.go:195] Run: sudo systemctl restart containerd
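The block above switches the node's CRI tooling and containerd over to the settings minikube needs: crictl is pointed at the containerd socket, the pause image and cgroup driver (SystemdCgroup = false, i.e. cgroupfs) are rewritten in config.toml, IPv4 forwarding is enabled, and containerd is restarted. Condensed into a by-hand sketch using the same commands the log shows:

    # Same steps as logged above, run manually inside the node (sketch only).
    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sudo systemctl daemon-reload && sudo systemctl restart containerd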
	I1124 03:40:09.629101  468607 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:40:09.629177  468607 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:40:09.633574  468607 start.go:564] Will wait 60s for crictl version
	I1124 03:40:09.633686  468607 ssh_runner.go:195] Run: which crictl
	I1124 03:40:09.637799  468607 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:40:09.680020  468607 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:40:09.680112  468607 ssh_runner.go:195] Run: containerd --version
	I1124 03:40:09.701052  468607 ssh_runner.go:195] Run: containerd --version
	I1124 03:40:09.728551  468607 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1124 03:40:09.731602  468607 cli_runner.go:164] Run: docker network inspect embed-certs-818836 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:40:09.752927  468607 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:40:09.757138  468607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
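The grep-and-rewrite pipeline above adds "192.168.76.1 host.minikube.internal" to /etc/hosts without duplicating an existing entry. An illustrative way to confirm the result from inside the node:

    grep 'host.minikube.internal' /etc/hosts
    getent hosts host.minikube.internal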
	I1124 03:40:09.767237  468607 kubeadm.go:884] updating cluster {Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:40:09.767356  468607 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:40:09.767434  468607 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:07.945073  465459 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:40:08.356082  465459 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:40:08.704960  465459 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:40:09.943963  465459 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:40:10.216943  465459 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:40:10.218580  465459 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:40:10.237543  465459 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:40:09.801793  468607 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:40:09.801818  468607 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:40:09.801887  468607 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:40:09.828434  468607 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:40:09.828460  468607 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:40:09.828491  468607 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 03:40:09.828596  468607 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-818836 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:40:09.828666  468607 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:40:09.855719  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:40:09.855746  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:09.855754  468607 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:40:09.855777  468607 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-818836 NodeName:embed-certs-818836 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:40:09.855896  468607 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-818836"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:40:09.855970  468607 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:40:09.864082  468607 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:40:09.864155  468607 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:40:09.871799  468607 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1124 03:40:09.885236  468607 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:40:09.903151  468607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
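The generated kubeadm config above has just been written to /var/tmp/minikube/kubeadm.yaml.new. As a sketch only (this run does not perform this step), recent kubeadm releases can sanity-check such a file before init:

    # Assumes kubeadm >= v1.31, which added "config validate"; paths match this run.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new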
	I1124 03:40:09.916330  468607 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:40:09.920755  468607 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:40:09.930245  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:10.095373  468607 ssh_runner.go:195] Run: sudo systemctl start kubelet
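Once the kubelet is started here, kubeadm's kubelet-check (further down in this log) polls http://127.0.0.1:10248/healthz until it reports healthy. The same probe can be issued by hand, illustratively:

    curl -sf http://127.0.0.1:10248/healthz && echo kubelet healthy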
	I1124 03:40:10.120719  468607 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836 for IP: 192.168.76.2
	I1124 03:40:10.120751  468607 certs.go:195] generating shared ca certs ...
	I1124 03:40:10.120775  468607 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.120926  468607 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:40:10.121022  468607 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:40:10.121036  468607 certs.go:257] generating profile certs ...
	I1124 03:40:10.121101  468607 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key
	I1124 03:40:10.121117  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt with IP's: []
	I1124 03:40:10.420574  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt ...
	I1124 03:40:10.420618  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.crt: {Name:mk242703eac12cbe34e4028bdd5925f7440b86e4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.420945  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key ...
	I1124 03:40:10.420962  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/client.key: {Name:mk4f7dbe6cf87f427019f2b9bb878908f82573e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.421164  468607 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253
	I1124 03:40:10.421185  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:40:10.579421  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 ...
	I1124 03:40:10.579459  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253: {Name:mk072dbea8dc92562bf332b98a65b57fa9581398 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.579707  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253 ...
	I1124 03:40:10.579733  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253: {Name:mk3986530288979c5c9a2178817e35e45248f3c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.579920  468607 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt.e897a253 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt
	I1124 03:40:10.580110  468607 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key.e897a253 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key
	I1124 03:40:10.580235  468607 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key
	I1124 03:40:10.580282  468607 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt with IP's: []
	I1124 03:40:10.650382  468607 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt ...
	I1124 03:40:10.650422  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt: {Name:mk7002a63ade6dd6830536f0b45108488d8d2647 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.650709  468607 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key ...
	I1124 03:40:10.650730  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key: {Name:mk9ed88761ece5843396144a4fbfafba4af7e713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:10.651036  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:40:10.651117  468607 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:40:10.651134  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:40:10.651185  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:40:10.651246  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:40:10.651301  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:40:10.651375  468607 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:40:10.652050  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:40:10.674232  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:40:10.698101  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:40:10.717381  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:40:10.737149  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 03:40:10.761648  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 03:40:10.786481  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:40:10.807220  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/embed-certs-818836/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 03:40:10.827613  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:40:10.849625  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:40:10.870797  468607 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:40:10.892331  468607 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:40:10.908461  468607 ssh_runner.go:195] Run: openssl version
	I1124 03:40:10.916101  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:40:10.926608  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.931358  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.931455  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:40:10.976219  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:40:10.986375  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:40:10.996391  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.017389  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.017511  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:40:11.093548  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:40:11.109631  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:40:11.122383  468607 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.127328  468607 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.127425  468607 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:40:11.171896  468607 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
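The three ln -fs commands above create the <subject-hash>.0 symlinks (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL uses to look up CA certificates; each hash comes from the openssl x509 -hash call the log shows just before the corresponding link. A sketch of the same relationship for the minikubeCA certificate:

    # Illustrative: reproduce the hash-named symlink for minikubeCA.pem by hand.
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # yields b5213941.0 in this run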
	I1124 03:40:11.181990  468607 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:40:11.186817  468607 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:40:11.186902  468607 kubeadm.go:401] StartCluster: {Name:embed-certs-818836 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-818836 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:40:11.187015  468607 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:40:11.187107  468607 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:40:11.229657  468607 cri.go:89] found id: ""
	I1124 03:40:11.229767  468607 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:40:11.239862  468607 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:40:11.249588  468607 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:40:11.249708  468607 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:40:11.261397  468607 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:40:11.261464  468607 kubeadm.go:158] found existing configuration files:
	
	I1124 03:40:11.261537  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:40:11.271489  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:40:11.271603  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:40:11.282245  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:40:11.295430  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:40:11.295544  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:40:11.303936  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:40:11.314965  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:40:11.315086  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:40:11.322532  468607 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:40:11.331297  468607 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:40:11.331410  468607 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:40:11.339587  468607 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:40:11.388094  468607 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 03:40:11.388694  468607 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 03:40:11.418975  468607 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 03:40:11.419097  468607 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1124 03:40:11.419162  468607 kubeadm.go:319] OS: Linux
	I1124 03:40:11.419229  468607 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 03:40:11.419310  468607 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1124 03:40:11.419397  468607 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 03:40:11.419482  468607 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 03:40:11.419545  468607 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 03:40:11.419609  468607 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 03:40:11.419672  468607 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 03:40:11.419733  468607 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 03:40:11.419793  468607 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1124 03:40:11.498745  468607 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 03:40:11.498892  468607 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 03:40:11.499019  468607 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
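The preflight output above notes that the control-plane images can be pulled ahead of time. A hedged one-liner using the same kubeadm binary and version as this run (not executed by the test):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull --kubernetes-version v1.34.1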
	I1124 03:40:11.505807  468607 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 03:40:10.241345  465459 out.go:252]   - Booting up control plane ...
	I1124 03:40:10.241455  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:40:10.245314  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:40:10.248607  465459 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:40:10.281242  465459 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:40:10.281374  465459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:40:10.290260  465459 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:40:10.290359  465459 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:40:10.290400  465459 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:40:10.449824  465459 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:40:10.450005  465459 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:40:11.952880  465459 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500833117s
	I1124 03:40:11.954116  465459 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:40:11.954483  465459 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1124 03:40:11.954823  465459 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:40:11.955791  465459 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:40:11.512278  468607 out.go:252]   - Generating certificates and keys ...
	I1124 03:40:11.512384  468607 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 03:40:11.512475  468607 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 03:40:12.156551  468607 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 03:40:12.440381  468607 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 03:40:13.054828  468607 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 03:40:14.412107  468607 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 03:40:17.439040  465459 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.482829056s
	I1124 03:40:14.824196  468607 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 03:40:14.824831  468607 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-818836 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:40:15.040863  468607 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 03:40:15.040998  468607 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-818836 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 03:40:15.376085  468607 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 03:40:15.719552  468607 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 03:40:16.788559  468607 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 03:40:16.789083  468607 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 03:40:17.179360  468607 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 03:40:17.589911  468607 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 03:40:18.716938  468607 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 03:40:19.434256  468607 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 03:40:19.598171  468607 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 03:40:19.599352  468607 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 03:40:19.612523  468607 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 03:40:19.615809  468607 out.go:252]   - Booting up control plane ...
	I1124 03:40:19.615923  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 03:40:19.616002  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 03:40:19.616070  468607 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 03:40:19.643244  468607 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 03:40:19.643372  468607 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 03:40:19.651919  468607 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 03:40:19.660667  468607 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 03:40:19.661493  468607 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 03:40:20.959069  465459 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.003836426s
	I1124 03:40:22.125067  465459 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.16861254s
	I1124 03:40:22.188271  465459 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:40:22.216515  465459 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:40:22.258578  465459 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:40:22.259036  465459 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-262280 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:40:22.271087  465459 kubeadm.go:319] [bootstrap-token] Using token: 2yptao.r7yd6l7ev1yowcqn
	I1124 03:40:22.274016  465459 out.go:252]   - Configuring RBAC rules ...
	I1124 03:40:22.274139  465459 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:40:22.285868  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:40:22.302245  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:40:22.309475  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:40:22.314669  465459 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:40:22.324840  465459 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:40:22.533610  465459 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:40:22.993832  465459 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:40:23.539106  465459 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:40:23.540728  465459 kubeadm.go:319] 
	I1124 03:40:23.540809  465459 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:40:23.540814  465459 kubeadm.go:319] 
	I1124 03:40:23.540891  465459 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:40:23.540895  465459 kubeadm.go:319] 
	I1124 03:40:23.540920  465459 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:40:23.541365  465459 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:40:23.541428  465459 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:40:23.541434  465459 kubeadm.go:319] 
	I1124 03:40:23.541487  465459 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:40:23.541491  465459 kubeadm.go:319] 
	I1124 03:40:23.541539  465459 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:40:23.541542  465459 kubeadm.go:319] 
	I1124 03:40:23.541594  465459 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:40:23.541669  465459 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:40:23.541737  465459 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:40:23.541741  465459 kubeadm.go:319] 
	I1124 03:40:23.542069  465459 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:40:23.542155  465459 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:40:23.542159  465459 kubeadm.go:319] 
	I1124 03:40:23.542500  465459 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2yptao.r7yd6l7ev1yowcqn \
	I1124 03:40:23.542614  465459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 \
	I1124 03:40:23.542853  465459 kubeadm.go:319] 	--control-plane 
	I1124 03:40:23.542871  465459 kubeadm.go:319] 
	I1124 03:40:23.543221  465459 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:40:23.543231  465459 kubeadm.go:319] 
	I1124 03:40:23.547828  465459 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2yptao.r7yd6l7ev1yowcqn \
	I1124 03:40:23.550982  465459 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 
	I1124 03:40:23.555511  465459 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:40:23.555736  465459 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:40:23.555841  465459 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
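Of the three warnings above, only the last is directly actionable on the node; minikube manages the kubelet itself, so this is illustrative rather than required for the test:

    sudo systemctl enable kubelet.service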
	I1124 03:40:23.555857  465459 cni.go:84] Creating CNI manager for ""
	I1124 03:40:23.555865  465459 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:23.559067  465459 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:40:19.836180  468607 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 03:40:19.836307  468607 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 03:40:20.837911  468607 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001791556s
	I1124 03:40:20.841824  468607 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 03:40:20.841924  468607 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 03:40:20.842025  468607 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 03:40:20.842109  468607 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 03:40:23.561962  465459 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:40:23.570649  465459 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:40:23.570666  465459 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:40:23.611043  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:40:24.448553  465459 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:40:24.448680  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:24.448750  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-262280 minikube.k8s.io/updated_at=2025_11_24T03_40_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=no-preload-262280 minikube.k8s.io/primary=true
	I1124 03:40:25.025787  465459 ops.go:34] apiserver oom_adj: -16
	I1124 03:40:25.025937  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:25.526394  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:26.025997  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:26.526754  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:27.026641  465459 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:27.253055  465459 kubeadm.go:1114] duration metric: took 2.804418537s to wait for elevateKubeSystemPrivileges
	I1124 03:40:27.253082  465459 kubeadm.go:403] duration metric: took 24.002425527s to StartCluster
	I1124 03:40:27.253101  465459 settings.go:142] acquiring lock: {Name:mk06b563e5bc383cd64ed92ea3d8ac6aac195923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:27.253165  465459 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:40:27.253834  465459 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/kubeconfig: {Name:mk59b88a9b5c6c93f7412b3f64976d4efe64bdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:27.254034  465459 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:40:27.254180  465459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:40:27.254424  465459 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:40:27.254486  465459 addons.go:70] Setting storage-provisioner=true in profile "no-preload-262280"
	I1124 03:40:27.254500  465459 addons.go:239] Setting addon storage-provisioner=true in "no-preload-262280"
	I1124 03:40:27.254522  465459 host.go:66] Checking if "no-preload-262280" exists ...
	I1124 03:40:27.255029  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.255348  465459 config.go:182] Loaded profile config "no-preload-262280": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:27.255425  465459 addons.go:70] Setting default-storageclass=true in profile "no-preload-262280"
	I1124 03:40:27.255459  465459 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-262280"
	I1124 03:40:27.255742  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.258534  465459 out.go:179] * Verifying Kubernetes components...
	I1124 03:40:27.264721  465459 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:27.290687  465459 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 03:40:27.293638  465459 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:27.293665  465459 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:40:27.293734  465459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-262280
	I1124 03:40:27.295179  465459 addons.go:239] Setting addon default-storageclass=true in "no-preload-262280"
	I1124 03:40:27.295223  465459 host.go:66] Checking if "no-preload-262280" exists ...
	I1124 03:40:27.295646  465459 cli_runner.go:164] Run: docker container inspect no-preload-262280 --format={{.State.Status}}
	I1124 03:40:27.333873  465459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/no-preload-262280/id_rsa Username:docker}
	I1124 03:40:27.342194  465459 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:27.342217  465459 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:40:27.342282  465459 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-262280
	I1124 03:40:27.369752  465459 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/no-preload-262280/id_rsa Username:docker}
	I1124 03:40:28.289510  468607 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.446711872s
	I1124 03:40:28.718064  468607 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 7.876138727s
	I1124 03:40:28.086729  465459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:28.166898  465459 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:40:28.167031  465459 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:28.202605  465459 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:29.603255  465459 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.436193485s)
	I1124 03:40:29.604024  465459 node_ready.go:35] waiting up to 6m0s for node "no-preload-262280" to be "Ready" ...
	I1124 03:40:29.604243  465459 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.437316052s)
	I1124 03:40:29.604267  465459 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 03:40:30.149139  465459 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-262280" context rescaled to 1 replicas
	I1124 03:40:30.266899  465459 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.064217856s)
	I1124 03:40:30.272444  465459 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1124 03:40:30.843974  468607 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.002059314s
	I1124 03:40:30.870609  468607 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 03:40:30.901638  468607 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 03:40:30.924179  468607 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 03:40:30.924719  468607 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-818836 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 03:40:30.940184  468607 kubeadm.go:319] [bootstrap-token] Using token: 0bimeo.bzidkyv9i8e7nkw3
	I1124 03:40:30.943266  468607 out.go:252]   - Configuring RBAC rules ...
	I1124 03:40:30.943387  468607 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 03:40:30.951610  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 03:40:30.963677  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 03:40:30.971959  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 03:40:30.977923  468607 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 03:40:30.986249  468607 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 03:40:31.251471  468607 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 03:40:31.778202  468607 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 03:40:32.251684  468607 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 03:40:32.253477  468607 kubeadm.go:319] 
	I1124 03:40:32.253550  468607 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 03:40:32.253555  468607 kubeadm.go:319] 
	I1124 03:40:32.253632  468607 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 03:40:32.253637  468607 kubeadm.go:319] 
	I1124 03:40:32.253662  468607 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 03:40:32.254164  468607 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 03:40:32.254227  468607 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 03:40:32.254231  468607 kubeadm.go:319] 
	I1124 03:40:32.254285  468607 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 03:40:32.254288  468607 kubeadm.go:319] 
	I1124 03:40:32.254336  468607 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 03:40:32.254339  468607 kubeadm.go:319] 
	I1124 03:40:32.254391  468607 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 03:40:32.254466  468607 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 03:40:32.254534  468607 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 03:40:32.254538  468607 kubeadm.go:319] 
	I1124 03:40:32.254839  468607 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 03:40:32.254921  468607 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 03:40:32.254928  468607 kubeadm.go:319] 
	I1124 03:40:32.255259  468607 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 0bimeo.bzidkyv9i8e7nkw3 \
	I1124 03:40:32.255368  468607 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 \
	I1124 03:40:32.255600  468607 kubeadm.go:319] 	--control-plane 
	I1124 03:40:32.255610  468607 kubeadm.go:319] 
	I1124 03:40:32.255896  468607 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 03:40:32.255905  468607 kubeadm.go:319] 
	I1124 03:40:32.256198  468607 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 0bimeo.bzidkyv9i8e7nkw3 \
	I1124 03:40:32.256558  468607 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:7c8724c9df7bddf0d2f355149f7d996f734006ccfb255d81436a9364083c5f40 
	I1124 03:40:32.262002  468607 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:40:32.262227  468607 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:40:32.262331  468607 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:40:32.262347  468607 cni.go:84] Creating CNI manager for ""
	I1124 03:40:32.262355  468607 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:40:32.265575  468607 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 03:40:30.275374  465459 addons.go:530] duration metric: took 3.020937085s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1124 03:40:31.607716  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:32.268802  468607 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 03:40:32.276058  468607 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 03:40:32.276076  468607 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 03:40:32.304040  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 03:40:32.950060  468607 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 03:40:32.950194  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:32.950260  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-818836 minikube.k8s.io/updated_at=2025_11_24T03_40_32_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864 minikube.k8s.io/name=embed-certs-818836 minikube.k8s.io/primary=true
	I1124 03:40:33.247296  468607 ops.go:34] apiserver oom_adj: -16
	I1124 03:40:33.247413  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:33.747810  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:34.247563  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:34.747727  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:35.248529  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:35.747874  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:36.248065  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:36.747517  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:37.248357  468607 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 03:40:37.375914  468607 kubeadm.go:1114] duration metric: took 4.425764478s to wait for elevateKubeSystemPrivileges
	I1124 03:40:37.375948  468607 kubeadm.go:403] duration metric: took 26.189049705s to StartCluster
	I1124 03:40:37.375965  468607 settings.go:142] acquiring lock: {Name:mk06b563e5bc383cd64ed92ea3d8ac6aac195923 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:37.376029  468607 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:40:37.377428  468607 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/kubeconfig: {Name:mk59b88a9b5c6c93f7412b3f64976d4efe64bdb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:40:37.377669  468607 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:40:37.377785  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 03:40:37.378042  468607 config.go:182] Loaded profile config "embed-certs-818836": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:40:37.378089  468607 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 03:40:37.378159  468607 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-818836"
	I1124 03:40:37.378172  468607 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-818836"
	I1124 03:40:37.378198  468607 host.go:66] Checking if "embed-certs-818836" exists ...
	I1124 03:40:37.378697  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.378976  468607 addons.go:70] Setting default-storageclass=true in profile "embed-certs-818836"
	I1124 03:40:37.379003  468607 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-818836"
	I1124 03:40:37.379254  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.381419  468607 out.go:179] * Verifying Kubernetes components...
	I1124 03:40:37.384428  468607 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:40:37.421715  468607 addons.go:239] Setting addon default-storageclass=true in "embed-certs-818836"
	I1124 03:40:37.421763  468607 host.go:66] Checking if "embed-certs-818836" exists ...
	I1124 03:40:37.422190  468607 cli_runner.go:164] Run: docker container inspect embed-certs-818836 --format={{.State.Status}}
	I1124 03:40:37.443094  468607 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1124 03:40:34.107205  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	W1124 03:40:36.107495  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:37.445972  468607 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:37.445995  468607 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 03:40:37.446062  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:37.468083  468607 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:37.468107  468607 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 03:40:37.468173  468607 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-818836
	I1124 03:40:37.505843  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:37.512810  468607 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/embed-certs-818836/id_rsa Username:docker}
	I1124 03:40:37.807453  468607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 03:40:37.824901  468607 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:40:37.825083  468607 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 03:40:37.844459  468607 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 03:40:38.592240  468607 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1124 03:40:38.594605  468607 node_ready.go:35] waiting up to 6m0s for node "embed-certs-818836" to be "Ready" ...
	I1124 03:40:38.651892  468607 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 03:40:38.655002  468607 addons.go:530] duration metric: took 1.276905995s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 03:40:39.096916  468607 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-818836" context rescaled to 1 replicas
	W1124 03:40:38.606995  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	W1124 03:40:40.607344  465459 node_ready.go:57] node "no-preload-262280" has "Ready":"False" status (will retry)
	I1124 03:40:42.608225  465459 node_ready.go:49] node "no-preload-262280" is "Ready"
	I1124 03:40:42.608272  465459 node_ready.go:38] duration metric: took 13.004210314s for node "no-preload-262280" to be "Ready" ...
	I1124 03:40:42.608287  465459 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:40:42.608350  465459 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:42.623406  465459 api_server.go:72] duration metric: took 15.369343221s to wait for apiserver process to appear ...
	I1124 03:40:42.623436  465459 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:40:42.623469  465459 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1124 03:40:42.633313  465459 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1124 03:40:42.634411  465459 api_server.go:141] control plane version: v1.34.1
	I1124 03:40:42.634433  465459 api_server.go:131] duration metric: took 10.990663ms to wait for apiserver health ...
	I1124 03:40:42.634442  465459 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:40:42.638347  465459 system_pods.go:59] 8 kube-system pods found
	I1124 03:40:42.638381  465459 system_pods.go:61] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.638387  465459 system_pods.go:61] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.638392  465459 system_pods.go:61] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.638396  465459 system_pods.go:61] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.638401  465459 system_pods.go:61] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.638404  465459 system_pods.go:61] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.638407  465459 system_pods.go:61] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.638413  465459 system_pods.go:61] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:42.638420  465459 system_pods.go:74] duration metric: took 3.972643ms to wait for pod list to return data ...
	I1124 03:40:42.638431  465459 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:40:42.641761  465459 default_sa.go:45] found service account: "default"
	I1124 03:40:42.641824  465459 default_sa.go:55] duration metric: took 3.386704ms for default service account to be created ...
	I1124 03:40:42.641868  465459 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:40:42.645101  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:42.645134  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.645141  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.645147  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.645155  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.645160  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.645164  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.645168  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.645173  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:42.645193  465459 retry.go:31] will retry after 242.077653ms: missing components: kube-dns
	I1124 03:40:42.893628  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:42.893678  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:42.893684  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:42.893699  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:42.893704  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:42.893709  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:42.893713  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:42.893716  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:42.893720  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:42.893822  465459 retry.go:31] will retry after 373.532935ms: missing components: kube-dns
	W1124 03:40:40.597355  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:42.597817  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:44.598213  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	I1124 03:40:43.271122  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:43.271161  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:43.271172  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:43.271178  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:43.271182  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:43.271187  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:43.271191  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:43.271195  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:43.271206  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:43.271221  465459 retry.go:31] will retry after 322.6325ms: missing components: kube-dns
	I1124 03:40:43.599918  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:43.600007  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:43.600023  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:43.600030  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:43.600035  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:43.600040  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:43.600044  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:43.600048  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:43.600051  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:43.600066  465459 retry.go:31] will retry after 394.949668ms: missing components: kube-dns
	I1124 03:40:44.001892  465459 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:44.001938  465459 system_pods.go:89] "coredns-66bc5c9577-mj9gd" [875322e9-dddd-4618-beec-76c737d16e3c] Running
	I1124 03:40:44.001946  465459 system_pods.go:89] "etcd-no-preload-262280" [d8231412-ad4c-46d0-943b-8af2c2277c0b] Running
	I1124 03:40:44.001952  465459 system_pods.go:89] "kindnet-tp8zg" [8b8b163b-5585-4d91-9717-95f656987530] Running
	I1124 03:40:44.001960  465459 system_pods.go:89] "kube-apiserver-no-preload-262280" [dde3aff0-bd29-4aa8-8fd6-d9b69d6ffdce] Running
	I1124 03:40:44.001965  465459 system_pods.go:89] "kube-controller-manager-no-preload-262280" [7a9d30e0-564d-477a-9415-81465fb30a55] Running
	I1124 03:40:44.001968  465459 system_pods.go:89] "kube-proxy-xg8w4" [e8388de5-8f36-444e-864f-efe3b946972c] Running
	I1124 03:40:44.001972  465459 system_pods.go:89] "kube-scheduler-no-preload-262280" [5c1d5b54-7d17-49d6-98fe-48b1699769d9] Running
	I1124 03:40:44.001976  465459 system_pods.go:89] "storage-provisioner" [430685c9-d2cd-4da8-90bb-666070ea7af5] Running
	I1124 03:40:44.001989  465459 system_pods.go:126] duration metric: took 1.36009666s to wait for k8s-apps to be running ...
	I1124 03:40:44.001998  465459 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:40:44.002065  465459 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:40:44.023562  465459 system_svc.go:56] duration metric: took 21.553336ms WaitForService to wait for kubelet
	I1124 03:40:44.023598  465459 kubeadm.go:587] duration metric: took 16.769539879s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:40:44.023618  465459 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:40:44.027009  465459 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:40:44.027046  465459 node_conditions.go:123] node cpu capacity is 2
	I1124 03:40:44.027060  465459 node_conditions.go:105] duration metric: took 3.437042ms to run NodePressure ...
	I1124 03:40:44.027074  465459 start.go:242] waiting for startup goroutines ...
	I1124 03:40:44.027110  465459 start.go:247] waiting for cluster config update ...
	I1124 03:40:44.027129  465459 start.go:256] writing updated cluster config ...
	I1124 03:40:44.027439  465459 ssh_runner.go:195] Run: rm -f paused
	I1124 03:40:44.032809  465459 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:44.036889  465459 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mj9gd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.042142  465459 pod_ready.go:94] pod "coredns-66bc5c9577-mj9gd" is "Ready"
	I1124 03:40:44.042172  465459 pod_ready.go:86] duration metric: took 5.207096ms for pod "coredns-66bc5c9577-mj9gd" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.044894  465459 pod_ready.go:83] waiting for pod "etcd-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.050138  465459 pod_ready.go:94] pod "etcd-no-preload-262280" is "Ready"
	I1124 03:40:44.050222  465459 pod_ready.go:86] duration metric: took 5.300135ms for pod "etcd-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.052994  465459 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.057831  465459 pod_ready.go:94] pod "kube-apiserver-no-preload-262280" is "Ready"
	I1124 03:40:44.057868  465459 pod_ready.go:86] duration metric: took 4.8387ms for pod "kube-apiserver-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.060783  465459 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.437093  465459 pod_ready.go:94] pod "kube-controller-manager-no-preload-262280" is "Ready"
	I1124 03:40:44.437124  465459 pod_ready.go:86] duration metric: took 376.313274ms for pod "kube-controller-manager-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:44.637747  465459 pod_ready.go:83] waiting for pod "kube-proxy-xg8w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.042982  465459 pod_ready.go:94] pod "kube-proxy-xg8w4" is "Ready"
	I1124 03:40:45.043021  465459 pod_ready.go:86] duration metric: took 405.246191ms for pod "kube-proxy-xg8w4" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.238605  465459 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.636771  465459 pod_ready.go:94] pod "kube-scheduler-no-preload-262280" is "Ready"
	I1124 03:40:45.636842  465459 pod_ready.go:86] duration metric: took 398.208005ms for pod "kube-scheduler-no-preload-262280" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:45.636877  465459 pod_ready.go:40] duration metric: took 1.604024878s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:45.700045  465459 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:40:45.703311  465459 out.go:179] * Done! kubectl is now configured to use "no-preload-262280" cluster and "default" namespace by default
	W1124 03:40:47.097978  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	W1124 03:40:49.098467  468607 node_ready.go:57] node "embed-certs-818836" has "Ready":"False" status (will retry)
	I1124 03:40:49.600289  468607 node_ready.go:49] node "embed-certs-818836" is "Ready"
	I1124 03:40:49.600325  468607 node_ready.go:38] duration metric: took 11.005685237s for node "embed-certs-818836" to be "Ready" ...
	I1124 03:40:49.600342  468607 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:40:49.600401  468607 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:40:49.616102  468607 api_server.go:72] duration metric: took 12.238396901s to wait for apiserver process to appear ...
	I1124 03:40:49.616131  468607 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:40:49.616151  468607 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 03:40:49.625663  468607 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1124 03:40:49.628248  468607 api_server.go:141] control plane version: v1.34.1
	I1124 03:40:49.628298  468607 api_server.go:131] duration metric: took 12.158646ms to wait for apiserver health ...
	I1124 03:40:49.628308  468607 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:40:49.635456  468607 system_pods.go:59] 8 kube-system pods found
	I1124 03:40:49.635501  468607 system_pods.go:61] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.635509  468607 system_pods.go:61] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.635527  468607 system_pods.go:61] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.635531  468607 system_pods.go:61] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.635536  468607 system_pods.go:61] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.635542  468607 system_pods.go:61] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.635546  468607 system_pods.go:61] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.635559  468607 system_pods.go:61] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.635566  468607 system_pods.go:74] duration metric: took 7.25158ms to wait for pod list to return data ...
	I1124 03:40:49.635579  468607 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:40:49.639861  468607 default_sa.go:45] found service account: "default"
	I1124 03:40:49.639903  468607 default_sa.go:55] duration metric: took 4.317754ms for default service account to be created ...
	I1124 03:40:49.639914  468607 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:40:49.642908  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:49.642943  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.642950  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.642956  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.642961  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.642975  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.642979  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.642984  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.642992  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.643018  468607 retry.go:31] will retry after 271.674831ms: missing components: kube-dns
	I1124 03:40:49.919376  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:49.919415  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:49.919423  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:49.919429  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:49.919435  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:49.919440  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:49.919444  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:49.919448  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:49.919455  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:49.919474  468607 retry.go:31] will retry after 335.268613ms: missing components: kube-dns
	I1124 03:40:50.262160  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:50.262218  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:50.262226  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:50.262264  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:50.262281  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:50.262290  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:50.262298  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:50.262302  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:50.262312  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:50.262349  468607 retry.go:31] will retry after 385.617551ms: missing components: kube-dns
	I1124 03:40:50.651970  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:50.652010  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:40:50.652018  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:50.652025  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:50.652030  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:50.652034  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:50.652038  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:50.652041  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:50.652047  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:40:50.652064  468607 retry.go:31] will retry after 470.580451ms: missing components: kube-dns
	I1124 03:40:51.133462  468607 system_pods.go:86] 8 kube-system pods found
	I1124 03:40:51.133497  468607 system_pods.go:89] "coredns-66bc5c9577-dgvvg" [0ef9d488-59a5-4f43-9832-c97f1c895bdd] Running
	I1124 03:40:51.133504  468607 system_pods.go:89] "etcd-embed-certs-818836" [7c4b7c70-dd30-4dd9-a5a9-e8e38d7d1cd8] Running
	I1124 03:40:51.133509  468607 system_pods.go:89] "kindnet-fxtfb" [5f021efe-9818-47f9-9567-504428fa8b11] Running
	I1124 03:40:51.133514  468607 system_pods.go:89] "kube-apiserver-embed-certs-818836" [a39c4358-09f6-4df9-8fc0-7643b085030a] Running
	I1124 03:40:51.133518  468607 system_pods.go:89] "kube-controller-manager-embed-certs-818836" [b3ce0380-0c30-4b21-a4d1-385461fd8e7b] Running
	I1124 03:40:51.133528  468607 system_pods.go:89] "kube-proxy-kqtwg" [a89f17a9-6fd2-47fd-b106-b177e8575a6a] Running
	I1124 03:40:51.133533  468607 system_pods.go:89] "kube-scheduler-embed-certs-818836" [15355629-1648-4fb6-8e9b-0787e422aaa4] Running
	I1124 03:40:51.133538  468607 system_pods.go:89] "storage-provisioner" [b0205ba1-f93d-444f-88a8-2d4eec603213] Running
	I1124 03:40:51.133558  468607 system_pods.go:126] duration metric: took 1.493636996s to wait for k8s-apps to be running ...
	I1124 03:40:51.133566  468607 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:40:51.133625  468607 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:40:51.151193  468607 system_svc.go:56] duration metric: took 17.617707ms WaitForService to wait for kubelet
	I1124 03:40:51.151222  468607 kubeadm.go:587] duration metric: took 13.773521156s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:40:51.151242  468607 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:40:51.158998  468607 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:40:51.159035  468607 node_conditions.go:123] node cpu capacity is 2
	I1124 03:40:51.159163  468607 node_conditions.go:105] duration metric: took 7.914387ms to run NodePressure ...
	I1124 03:40:51.159180  468607 start.go:242] waiting for startup goroutines ...
	I1124 03:40:51.159201  468607 start.go:247] waiting for cluster config update ...
	I1124 03:40:51.159225  468607 start.go:256] writing updated cluster config ...
	I1124 03:40:51.159566  468607 ssh_runner.go:195] Run: rm -f paused
	I1124 03:40:51.163938  468607 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:51.233364  468607 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dgvvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.238633  468607 pod_ready.go:94] pod "coredns-66bc5c9577-dgvvg" is "Ready"
	I1124 03:40:51.238668  468607 pod_ready.go:86] duration metric: took 5.226756ms for pod "coredns-66bc5c9577-dgvvg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.242048  468607 pod_ready.go:83] waiting for pod "etcd-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.247506  468607 pod_ready.go:94] pod "etcd-embed-certs-818836" is "Ready"
	I1124 03:40:51.247534  468607 pod_ready.go:86] duration metric: took 5.457921ms for pod "etcd-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.250505  468607 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.256168  468607 pod_ready.go:94] pod "kube-apiserver-embed-certs-818836" is "Ready"
	I1124 03:40:51.256200  468607 pod_ready.go:86] duration metric: took 5.665265ms for pod "kube-apiserver-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.258827  468607 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.568969  468607 pod_ready.go:94] pod "kube-controller-manager-embed-certs-818836" is "Ready"
	I1124 03:40:51.568996  468607 pod_ready.go:86] duration metric: took 310.144443ms for pod "kube-controller-manager-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:51.768346  468607 pod_ready.go:83] waiting for pod "kube-proxy-kqtwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.168601  468607 pod_ready.go:94] pod "kube-proxy-kqtwg" is "Ready"
	I1124 03:40:52.168630  468607 pod_ready.go:86] duration metric: took 400.250484ms for pod "kube-proxy-kqtwg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.369520  468607 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.768587  468607 pod_ready.go:94] pod "kube-scheduler-embed-certs-818836" is "Ready"
	I1124 03:40:52.768616  468607 pod_ready.go:86] duration metric: took 399.065879ms for pod "kube-scheduler-embed-certs-818836" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:40:52.768629  468607 pod_ready.go:40] duration metric: took 1.604655617s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:40:52.832190  468607 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:40:52.835417  468607 out.go:179] * Done! kubectl is now configured to use "embed-certs-818836" cluster and "default" namespace by default
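	
	The log above ends with both profiles converging through the same pattern: poll the node object until its Ready condition turns True (the repeated node_ready.go retries), then poll the kube-system pods. The following is a minimal illustrative client-go sketch of that wait loop, not minikube's own implementation; the node name and 6-minute timeout are taken from the log, while the helper name and 2-second polling interval are assumptions.
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitForNodeReady is an illustrative helper (not minikube code) mirroring the
	// "waiting up to 6m0s for node ... to be Ready" loop seen in the log above.
	func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("node %q not Ready: %w", name, ctx.Err())
			case <-time.After(2 * time.Second): // assumed interval; minikube uses its own retry/backoff
			}
		}
	}
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitForNodeReady(ctx, kubernetes.NewForConfigOrDie(cfg), "embed-certs-818836"); err != nil {
			panic(err)
		}
		fmt.Println(`node "embed-certs-818836" is "Ready"`)
	}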
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a58a4728ac10f       1611cd07b61d5       10 seconds ago      Running             busybox                   0                   0308b01a7a26f       busybox                                      default
	6260374b03f86       138784d87c9c5       16 seconds ago      Running             coredns                   0                   2458228456a3b       coredns-66bc5c9577-dgvvg                     kube-system
	bdaea43dac204       ba04bb24b9575       16 seconds ago      Running             storage-provisioner       0                   8152ad9444328       storage-provisioner                          kube-system
	466fe30e398c2       b1a8c6f707935       27 seconds ago      Running             kindnet-cni               0                   54530eb20f030       kindnet-fxtfb                                kube-system
	4d1ac5a789d22       05baa95f5142d       28 seconds ago      Running             kube-proxy                0                   2c4ac076c25c1       kube-proxy-kqtwg                             kube-system
	a59e80e4497b4       b5f57ec6b9867       44 seconds ago      Running             kube-scheduler            0                   b5e98495343e1       kube-scheduler-embed-certs-818836            kube-system
	06008282a01c0       43911e833d64d       44 seconds ago      Running             kube-apiserver            0                   74f1dfbc093ce       kube-apiserver-embed-certs-818836            kube-system
	6b9c388047cfa       7eb2c6ff0c5a7       44 seconds ago      Running             kube-controller-manager   0                   42ade8eb3674e       kube-controller-manager-embed-certs-818836   kube-system
	ac1d217ae9676       a1894772a478e       45 seconds ago      Running             etcd                      0                   db52836f67dc1       etcd-embed-certs-818836                      kube-system
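	
	The container status table above is CRI-level state held in containerd's "k8s.io" namespace on the node. As a hedged sketch only (not part of the test suite), the same containers can be listed programmatically with containerd's Go client; the socket path and namespace match this environment, and error handling is kept minimal.
	
	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		containerd "github.com/containerd/containerd"
		"github.com/containerd/containerd/namespaces"
	)
	
	func main() {
		// Kubernetes-managed containers live in containerd's "k8s.io" namespace.
		client, err := containerd.New("/run/containerd/containerd.sock")
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
	
		ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
		containers, err := client.Containers(ctx)
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range containers {
			if img, err := c.Image(ctx); err == nil {
				fmt.Printf("%s\t%s\n", c.ID(), img.Name())
			} else {
				fmt.Println(c.ID())
			}
		}
	}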
	
	
	==> containerd <==
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.057907964Z" level=info msg="connecting to shim bdaea43dac204948bdf28895d9cb5bdf2db2c74e81ace882300ed5718f87add6" address="unix:///run/containerd/s/57f2a81bbcac182bad45dbeab33a1327cfff18ad488d5ce902af0bfdf0e7bc5e" protocol=ttrpc version=3
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.100785614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-dgvvg,Uid:0ef9d488-59a5-4f43-9832-c97f1c895bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2458228456a3be9fe22927f01f937d8dff13347bb763bfbef568481f4f4b7b5c\""
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.109653385Z" level=info msg="CreateContainer within sandbox \"2458228456a3be9fe22927f01f937d8dff13347bb763bfbef568481f4f4b7b5c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.122564109Z" level=info msg="Container 6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.135428844Z" level=info msg="CreateContainer within sandbox \"2458228456a3be9fe22927f01f937d8dff13347bb763bfbef568481f4f4b7b5c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7\""
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.136416140Z" level=info msg="StartContainer for \"6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7\""
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.137607113Z" level=info msg="connecting to shim 6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7" address="unix:///run/containerd/s/a4e37a1f6b438936ba3f660d2ba31a71943470dca57caaebd80ed68edb829e37" protocol=ttrpc version=3
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.198426517Z" level=info msg="StartContainer for \"bdaea43dac204948bdf28895d9cb5bdf2db2c74e81ace882300ed5718f87add6\" returns successfully"
	Nov 24 03:40:50 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:50.230648924Z" level=info msg="StartContainer for \"6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7\" returns successfully"
	Nov 24 03:40:53 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:53.383842754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:558523a2-89e3-43af-9d9f-326d9e1d9629,Namespace:default,Attempt:0,}"
	Nov 24 03:40:53 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:53.447848870Z" level=info msg="connecting to shim 0308b01a7a26fea59abf7edb5f2a7031f830ee4e945d7900726ad1d0604c1492" address="unix:///run/containerd/s/23d48a1f08b22aec323cafe57c4c4fb059dde661aeeebebe704b7840c9169c9c" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:40:53 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:53.499028110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:558523a2-89e3-43af-9d9f-326d9e1d9629,Namespace:default,Attempt:0,} returns sandbox id \"0308b01a7a26fea59abf7edb5f2a7031f830ee4e945d7900726ad1d0604c1492\""
	Nov 24 03:40:53 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:53.504031586Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.531248955Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.535067333Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.537704928Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.541782319Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.543332597Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.039101863s"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.543388548Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.559734707Z" level=info msg="CreateContainer within sandbox \"0308b01a7a26fea59abf7edb5f2a7031f830ee4e945d7900726ad1d0604c1492\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.573406183Z" level=info msg="Container a58a4728ac10f75bdaebf884535abef5a37cfccdf62f946c782091daf81530e9: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.585552965Z" level=info msg="CreateContainer within sandbox \"0308b01a7a26fea59abf7edb5f2a7031f830ee4e945d7900726ad1d0604c1492\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"a58a4728ac10f75bdaebf884535abef5a37cfccdf62f946c782091daf81530e9\""
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.586502788Z" level=info msg="StartContainer for \"a58a4728ac10f75bdaebf884535abef5a37cfccdf62f946c782091daf81530e9\""
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.588716955Z" level=info msg="connecting to shim a58a4728ac10f75bdaebf884535abef5a37cfccdf62f946c782091daf81530e9" address="unix:///run/containerd/s/23d48a1f08b22aec323cafe57c4c4fb059dde661aeeebebe704b7840c9169c9c" protocol=ttrpc version=3
	Nov 24 03:40:55 embed-certs-818836 containerd[759]: time="2025-11-24T03:40:55.676448344Z" level=info msg="StartContainer for \"a58a4728ac10f75bdaebf884535abef5a37cfccdf62f946c782091daf81530e9\" returns successfully"
	
	
	==> coredns [6260374b03f86e39ab59000e7b2b68b3a38adb7f0bffd6437f9a695e019324a7] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43956 - 37230 "HINFO IN 739609537041384603.8632231235251508514. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.079132064s
	
	
	==> describe nodes <==
	Name:               embed-certs-818836
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-818836
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=embed-certs-818836
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_40_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:40:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-818836
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:41:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:41:02 +0000   Mon, 24 Nov 2025 03:40:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:41:02 +0000   Mon, 24 Nov 2025 03:40:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:41:02 +0000   Mon, 24 Nov 2025 03:40:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:41:02 +0000   Mon, 24 Nov 2025 03:40:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-818836
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                1beb3fc5-b491-4e20-a9b9-ad38a1b35e92
	  Boot ID:                    63a8a852-1462-44b1-9d6f-f77d26e8568f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-dgvvg                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-embed-certs-818836                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-fxtfb                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-embed-certs-818836             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-embed-certs-818836    200m (10%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-kqtwg                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-embed-certs-818836             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node embed-certs-818836 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node embed-certs-818836 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     46s (x7 over 46s)  kubelet          Node embed-certs-818836 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  46s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  34s                kubelet          Node embed-certs-818836 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    34s                kubelet          Node embed-certs-818836 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     34s                kubelet          Node embed-certs-818836 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           30s                node-controller  Node embed-certs-818836 event: Registered Node embed-certs-818836 in Controller
	  Normal   NodeReady                17s                kubelet          Node embed-certs-818836 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:27] overlayfs: idmapped layers are currently not supported
	[Nov24 02:28] overlayfs: idmapped layers are currently not supported
	[Nov24 02:30] overlayfs: idmapped layers are currently not supported
	[  +9.824160] overlayfs: idmapped layers are currently not supported
	[Nov24 02:31] overlayfs: idmapped layers are currently not supported
	[Nov24 02:32] overlayfs: idmapped layers are currently not supported
	[ +27.981383] overlayfs: idmapped layers are currently not supported
	[Nov24 02:33] overlayfs: idmapped layers are currently not supported
	[Nov24 02:34] overlayfs: idmapped layers are currently not supported
	[Nov24 02:35] overlayfs: idmapped layers are currently not supported
	[Nov24 02:36] overlayfs: idmapped layers are currently not supported
	[Nov24 02:37] overlayfs: idmapped layers are currently not supported
	[Nov24 02:38] overlayfs: idmapped layers are currently not supported
	[Nov24 02:39] overlayfs: idmapped layers are currently not supported
	[ +24.837346] overlayfs: idmapped layers are currently not supported
	[Nov24 02:40] overlayfs: idmapped layers are currently not supported
	[ +40.823948] overlayfs: idmapped layers are currently not supported
	[  +1.705989] overlayfs: idmapped layers are currently not supported
	[Nov24 02:42] overlayfs: idmapped layers are currently not supported
	[ +21.661904] overlayfs: idmapped layers are currently not supported
	[Nov24 02:44] overlayfs: idmapped layers are currently not supported
	[  +1.074777] overlayfs: idmapped layers are currently not supported
	[Nov24 02:46] overlayfs: idmapped layers are currently not supported
	[ +19.120392] overlayfs: idmapped layers are currently not supported
	[Nov24 02:48] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [ac1d217ae967618cbe817fd20ce47ce5cb82bbe446e86ca4529a98da239abdf7] <==
	{"level":"warn","ts":"2025-11-24T03:40:25.873553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.919807Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.921154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.941933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.961019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.976792Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:25.994961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.021776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.041879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.070442Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.080946Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.118876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49072","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.146734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.186759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.224841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.271473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.298803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.388399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.423077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.452751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.474020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.497747Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.524876Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.580699Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:40:26.733470Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49294","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:41:06 up  2:23,  0 user,  load average: 4.84, 3.81, 3.05
	Linux embed-certs-818836 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [466fe30e398c25b51e46fe99b224055705f1cf68fe2bb27f8a8daa065373d23d] <==
	I1124 03:40:39.230199       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:40:39.230426       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1124 03:40:39.230553       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:40:39.230564       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:40:39.230578       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:40:39Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:40:39.432550       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:40:39.432731       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:40:39.432778       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:40:39.434006       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 03:40:39.733771       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:40:39.733800       1 metrics.go:72] Registering metrics
	I1124 03:40:39.734036       1 controller.go:711] "Syncing nftables rules"
	I1124 03:40:49.436805       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:40:49.436882       1 main.go:301] handling current node
	I1124 03:40:59.432570       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1124 03:40:59.432618       1 main.go:301] handling current node
	
	
	==> kube-apiserver [06008282a01c0b88fab50226602b3a4cc42c51fa3b8c8cee4a7d3d29f430950a] <==
	I1124 03:40:28.614790       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 03:40:28.618426       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 03:40:28.683666       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:28.689620       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:40:28.724556       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:40:28.771948       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:40:28.788976       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:28.789247       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:40:29.313030       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:40:29.345951       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:40:29.346136       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:40:30.676322       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:40:30.745285       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:40:30.826682       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:40:30.836003       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1124 03:40:30.837380       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:40:30.852808       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:40:31.449556       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:40:31.721331       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:40:31.776793       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:40:31.801120       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:40:36.754712       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 03:40:37.425269       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:40:37.577618       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:40:37.588850       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [6b9c388047cfaf63599101a93c74576aad5ddfbe42d36bc9d2587f8610c0b185] <==
	I1124 03:40:36.493298       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 03:40:36.494421       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 03:40:36.494437       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 03:40:36.494800       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 03:40:36.495009       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-818836"
	I1124 03:40:36.495160       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 03:40:36.495260       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:40:36.495342       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 03:40:36.495632       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:40:36.495662       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 03:40:36.496094       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 03:40:36.496408       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:40:36.496595       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:40:36.496719       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:40:36.497479       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 03:40:36.498367       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 03:40:36.500910       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 03:40:36.501127       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:40:36.504598       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:40:36.510959       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 03:40:36.512065       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 03:40:36.528541       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:40:36.528569       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:40:36.528578       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:40:51.497321       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [4d1ac5a789d22eca3c9aec74f820ba93b3ff927e4ac76703af976882df0f285e] <==
	I1124 03:40:38.210327       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:40:38.309570       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:40:38.409726       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:40:38.409774       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1124 03:40:38.409907       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:40:38.454951       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:40:38.455010       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:40:38.459511       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:40:38.459862       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:40:38.459876       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:40:38.461544       1 config.go:200] "Starting service config controller"
	I1124 03:40:38.461555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:40:38.461572       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:40:38.461577       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:40:38.461589       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:40:38.461593       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:40:38.466326       1 config.go:309] "Starting node config controller"
	I1124 03:40:38.466511       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:40:38.466609       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:40:38.563656       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:40:38.564600       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:40:38.564630       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a59e80e4497b4929b98a64a841dc410b4ba2a701446d53829e16139bc9d77a8b] <==
	E1124 03:40:28.733045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:40:28.733299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:40:28.733554       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:40:28.733988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:40:28.734185       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:40:28.734353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:40:28.734648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:40:29.558668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 03:40:29.593259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:40:29.678555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:40:29.678978       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:40:29.696638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:40:29.712980       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:40:29.950771       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 03:40:29.975983       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:40:29.980747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:40:30.007688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:40:30.139616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:40:30.139697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:40:30.139759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:40:30.143540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:40:30.143618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:40:30.166069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:40:30.187032       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	I1124 03:40:32.276815       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: I1124 03:40:36.832544    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f021efe-9818-47f9-9567-504428fa8b11-lib-modules\") pod \"kindnet-fxtfb\" (UID: \"5f021efe-9818-47f9-9567-504428fa8b11\") " pod="kube-system/kindnet-fxtfb"
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: I1124 03:40:36.932983    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a89f17a9-6fd2-47fd-b106-b177e8575a6a-kube-proxy\") pod \"kube-proxy-kqtwg\" (UID: \"a89f17a9-6fd2-47fd-b106-b177e8575a6a\") " pod="kube-system/kube-proxy-kqtwg"
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: I1124 03:40:36.933046    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a89f17a9-6fd2-47fd-b106-b177e8575a6a-xtables-lock\") pod \"kube-proxy-kqtwg\" (UID: \"a89f17a9-6fd2-47fd-b106-b177e8575a6a\") " pod="kube-system/kube-proxy-kqtwg"
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: I1124 03:40:36.933097    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a89f17a9-6fd2-47fd-b106-b177e8575a6a-lib-modules\") pod \"kube-proxy-kqtwg\" (UID: \"a89f17a9-6fd2-47fd-b106-b177e8575a6a\") " pod="kube-system/kube-proxy-kqtwg"
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: I1124 03:40:36.933138    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8sh2\" (UniqueName: \"kubernetes.io/projected/a89f17a9-6fd2-47fd-b106-b177e8575a6a-kube-api-access-m8sh2\") pod \"kube-proxy-kqtwg\" (UID: \"a89f17a9-6fd2-47fd-b106-b177e8575a6a\") " pod="kube-system/kube-proxy-kqtwg"
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: E1124 03:40:36.942493    1470 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: E1124 03:40:36.942541    1470 projected.go:196] Error preparing data for projected volume kube-api-access-xm5rz for pod kube-system/kindnet-fxtfb: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:36 embed-certs-818836 kubelet[1470]: E1124 03:40:36.943761    1470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f021efe-9818-47f9-9567-504428fa8b11-kube-api-access-xm5rz podName:5f021efe-9818-47f9-9567-504428fa8b11 nodeName:}" failed. No retries permitted until 2025-11-24 03:40:37.443702896 +0000 UTC m=+5.803124863 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xm5rz" (UniqueName: "kubernetes.io/projected/5f021efe-9818-47f9-9567-504428fa8b11-kube-api-access-xm5rz") pod "kindnet-fxtfb" (UID: "5f021efe-9818-47f9-9567-504428fa8b11") : configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.044030    1470 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.044068    1470 projected.go:196] Error preparing data for projected volume kube-api-access-m8sh2 for pod kube-system/kube-proxy-kqtwg: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.044178    1470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a89f17a9-6fd2-47fd-b106-b177e8575a6a-kube-api-access-m8sh2 podName:a89f17a9-6fd2-47fd-b106-b177e8575a6a nodeName:}" failed. No retries permitted until 2025-11-24 03:40:37.544155017 +0000 UTC m=+5.903577009 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-m8sh2" (UniqueName: "kubernetes.io/projected/a89f17a9-6fd2-47fd-b106-b177e8575a6a-kube-api-access-m8sh2") pod "kube-proxy-kqtwg" (UID: "a89f17a9-6fd2-47fd-b106-b177e8575a6a") : configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.538454    1470 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.538485    1470 projected.go:196] Error preparing data for projected volume kube-api-access-xm5rz for pod kube-system/kindnet-fxtfb: configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: E1124 03:40:37.538570    1470 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5f021efe-9818-47f9-9567-504428fa8b11-kube-api-access-xm5rz podName:5f021efe-9818-47f9-9567-504428fa8b11 nodeName:}" failed. No retries permitted until 2025-11-24 03:40:38.538550273 +0000 UTC m=+6.897972248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-xm5rz" (UniqueName: "kubernetes.io/projected/5f021efe-9818-47f9-9567-504428fa8b11-kube-api-access-xm5rz") pod "kindnet-fxtfb" (UID: "5f021efe-9818-47f9-9567-504428fa8b11") : configmap "kube-root-ca.crt" not found
	Nov 24 03:40:37 embed-certs-818836 kubelet[1470]: I1124 03:40:37.643727    1470 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 03:40:39 embed-certs-818836 kubelet[1470]: I1124 03:40:39.013629    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kqtwg" podStartSLOduration=3.013602252 podStartE2EDuration="3.013602252s" podCreationTimestamp="2025-11-24 03:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:39.013368814 +0000 UTC m=+7.372790789" watchObservedRunningTime="2025-11-24 03:40:39.013602252 +0000 UTC m=+7.373024218"
	Nov 24 03:40:40 embed-certs-818836 kubelet[1470]: I1124 03:40:40.064696    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fxtfb" podStartSLOduration=4.06467528 podStartE2EDuration="4.06467528s" podCreationTimestamp="2025-11-24 03:40:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:40.061533017 +0000 UTC m=+8.420954992" watchObservedRunningTime="2025-11-24 03:40:40.06467528 +0000 UTC m=+8.424097255"
	Nov 24 03:40:49 embed-certs-818836 kubelet[1470]: I1124 03:40:49.526399    1470 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:40:49 embed-certs-818836 kubelet[1470]: I1124 03:40:49.648175    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgp8j\" (UniqueName: \"kubernetes.io/projected/0ef9d488-59a5-4f43-9832-c97f1c895bdd-kube-api-access-cgp8j\") pod \"coredns-66bc5c9577-dgvvg\" (UID: \"0ef9d488-59a5-4f43-9832-c97f1c895bdd\") " pod="kube-system/coredns-66bc5c9577-dgvvg"
	Nov 24 03:40:49 embed-certs-818836 kubelet[1470]: I1124 03:40:49.648240    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/b0205ba1-f93d-444f-88a8-2d4eec603213-tmp\") pod \"storage-provisioner\" (UID: \"b0205ba1-f93d-444f-88a8-2d4eec603213\") " pod="kube-system/storage-provisioner"
	Nov 24 03:40:49 embed-certs-818836 kubelet[1470]: I1124 03:40:49.648264    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv5dr\" (UniqueName: \"kubernetes.io/projected/b0205ba1-f93d-444f-88a8-2d4eec603213-kube-api-access-zv5dr\") pod \"storage-provisioner\" (UID: \"b0205ba1-f93d-444f-88a8-2d4eec603213\") " pod="kube-system/storage-provisioner"
	Nov 24 03:40:49 embed-certs-818836 kubelet[1470]: I1124 03:40:49.648286    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ef9d488-59a5-4f43-9832-c97f1c895bdd-config-volume\") pod \"coredns-66bc5c9577-dgvvg\" (UID: \"0ef9d488-59a5-4f43-9832-c97f1c895bdd\") " pod="kube-system/coredns-66bc5c9577-dgvvg"
	Nov 24 03:40:51 embed-certs-818836 kubelet[1470]: I1124 03:40:51.107730    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.107708724 podStartE2EDuration="13.107708724s" podCreationTimestamp="2025-11-24 03:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:51.088433727 +0000 UTC m=+19.447855694" watchObservedRunningTime="2025-11-24 03:40:51.107708724 +0000 UTC m=+19.467130691"
	Nov 24 03:40:51 embed-certs-818836 kubelet[1470]: I1124 03:40:51.107857    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dgvvg" podStartSLOduration=14.107850403 podStartE2EDuration="14.107850403s" podCreationTimestamp="2025-11-24 03:40:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:40:51.106664074 +0000 UTC m=+19.466086049" watchObservedRunningTime="2025-11-24 03:40:51.107850403 +0000 UTC m=+19.467272378"
	Nov 24 03:40:53 embed-certs-818836 kubelet[1470]: I1124 03:40:53.170547    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnt4g\" (UniqueName: \"kubernetes.io/projected/558523a2-89e3-43af-9d9f-326d9e1d9629-kube-api-access-cnt4g\") pod \"busybox\" (UID: \"558523a2-89e3-43af-9d9f-326d9e1d9629\") " pod="default/busybox"
	
	
	==> storage-provisioner [bdaea43dac204948bdf28895d9cb5bdf2db2c74e81ace882300ed5718f87add6] <==
	I1124 03:40:50.266726       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:40:50.283251       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:50.293273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:40:50.293652       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:40:50.294024       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-818836_9f050f1e-62b2-4d60-af55-6500e2d54406!
	I1124 03:40:50.295422       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"1d88f2cb-63eb-466e-8cde-49b8ebb184fc", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-818836_9f050f1e-62b2-4d60-af55-6500e2d54406 became leader
	W1124 03:40:50.296129       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:50.307875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:40:50.394893       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-818836_9f050f1e-62b2-4d60-af55-6500e2d54406!
	W1124 03:40:52.311440       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:52.318451       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:54.322101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:54.327443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:56.331080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:56.340182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:58.343055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:40:58.349135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:00.355624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:00.365545       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:02.368661       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:02.374294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:04.377180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:04.382410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:06.387153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:41:06.392418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-818836 -n embed-certs-818836
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-818836 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.67s)

x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (16.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-774072 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [362be5db-8e55-42d3-af79-d334755f6b33] Pending
helpers_test.go:352: "busybox" [362be5db-8e55-42d3-af79-d334755f6b33] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [362be5db-8e55-42d3-af79-d334755f6b33] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.003423694s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-774072 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
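The failure mode matches the earlier DeployApp cases: the busybox pod reaches Running, but 'ulimit -n' inside it reports 1024 rather than the 1048576 the test asserts. A minimal sketch of checking the limit by hand, assuming the docker driver on the host and reusing the context name from this run (the expected value of 1048576 comes from the test itself, not from any daemon default):

    # inside the cluster: the same command the test runs
    kubectl --context default-k8s-diff-port-774072 exec busybox -- /bin/sh -c "ulimit -n"

    # on the host: the nofile limit a fresh container receives from the Docker daemon,
    # and what an explicit --ulimit request yields
    docker run --rm busybox sh -c "ulimit -n"
    docker run --rm --ulimit nofile=1048576:1048576 busybox sh -c "ulimit -n"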
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-774072
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-774072:

-- stdout --
	[
	    {
	        "Id": "83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054",
	        "Created": "2025-11-24T03:42:32.790872915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484005,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:42:32.852301993Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054/hostname",
	        "HostsPath": "/var/lib/docker/containers/83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054/hosts",
	        "LogPath": "/var/lib/docker/containers/83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054/83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054-json.log",
	        "Name": "/default-k8s-diff-port-774072",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-774072:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-774072",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054",
	                "LowerDir": "/var/lib/docker/overlay2/54edbf09208c823b3c396bb17695a59fe8d58636538a81bc10631fa6bc4a3d32-init/diff:/var/lib/docker/overlay2/11b197f530f0d571f61892814d8d4c774f7d3e5a97abdd8c5aa182cc99b2d856/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54edbf09208c823b3c396bb17695a59fe8d58636538a81bc10631fa6bc4a3d32/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54edbf09208c823b3c396bb17695a59fe8d58636538a81bc10631fa6bc4a3d32/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54edbf09208c823b3c396bb17695a59fe8d58636538a81bc10631fa6bc4a3d32/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-774072",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-774072/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-774072",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-774072",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-774072",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edbc99db19d9ce34cdd7f8b3f10d059f72be5d332852c6323b29f8a4b1c22907",
	            "SandboxKey": "/var/run/docker/netns/edbc99db19d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-774072": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:aa:8b:6a:90:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "35922bea1d4340d491a486d80047f4b47e62915896a7161451ad82fc12397c15",
	                    "EndpointID": "81abac5145baa1986e4eaaec05cf0701c0a48f44911c2796584b55b708986cf0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-774072",
	                        "83cef67c2972"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
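The inspect output above records the container details the rest of this post-mortem refers back to: the profile container publishes ports 22, 2376, 5000, 8444 and 32443 on 127.0.0.1 with ephemeral host ports, and holds the static address 192.168.85.2 (gateway 192.168.85.1) on the default-k8s-diff-port-774072 docker network. A minimal sketch for re-querying just those fields on the build host, assuming the default-k8s-diff-port-774072 container has not yet been deleted by cleanup:

	# published port map, as shown under NetworkSettings.Ports above
	docker inspect default-k8s-diff-port-774072 --format '{{json .NetworkSettings.Ports}}'
	# per-network address and gateway for the profile's network
	docker inspect default-k8s-diff-port-774072 --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} (gateway {{.Gateway}}){{end}}'
	# host-side binding for the --apiserver-port=8444 endpoint
	docker port default-k8s-diff-port-774072 8444/tcp

On the state captured here, the last command would print the 127.0.0.1:33451 binding recorded above.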
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-774072 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-774072 logs -n 25: (1.765529213s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable dashboard -p embed-certs-818836 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:41 UTC │ 24 Nov 25 03:41 UTC │
	│ start   │ -p embed-certs-818836 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:41 UTC │ 24 Nov 25 03:42 UTC │
	│ image   │ no-preload-262280 image list --format=json                                                                                                                                                                                                          │ no-preload-262280            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ pause   │ -p no-preload-262280 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-262280            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ unpause │ -p no-preload-262280 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-262280            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ delete  │ -p no-preload-262280                                                                                                                                                                                                                                │ no-preload-262280            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ delete  │ -p no-preload-262280                                                                                                                                                                                                                                │ no-preload-262280            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ delete  │ -p disable-driver-mounts-973998                                                                                                                                                                                                                     │ disable-driver-mounts-973998 │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ start   │ -p default-k8s-diff-port-774072 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-774072 │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:43 UTC │
	│ image   │ embed-certs-818836 image list --format=json                                                                                                                                                                                                         │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ pause   │ -p embed-certs-818836 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ unpause │ -p embed-certs-818836 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ delete  │ -p embed-certs-818836                                                                                                                                                                                                                               │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ delete  │ -p embed-certs-818836                                                                                                                                                                                                                               │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ start   │ -p newest-cni-934324 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-934324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ stop    │ -p newest-cni-934324 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-934324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ start   │ -p newest-cni-934324 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ image   │ newest-cni-934324 image list --format=json                                                                                                                                                                                                          │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ pause   │ -p newest-cni-934324 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ unpause │ -p newest-cni-934324 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ delete  │ -p newest-cni-934324                                                                                                                                                                                                                                │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ delete  │ -p newest-cni-934324                                                                                                                                                                                                                                │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ start   │ -p auto-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-842431                  │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:43:38
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:43:38.708999  492561 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:43:38.709248  492561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:43:38.709279  492561 out.go:374] Setting ErrFile to fd 2...
	I1124 03:43:38.709299  492561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:43:38.709658  492561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:43:38.710142  492561 out.go:368] Setting JSON to false
	I1124 03:43:38.711166  492561 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8747,"bootTime":1763947072,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:43:38.711263  492561 start.go:143] virtualization:  
	I1124 03:43:38.715355  492561 out.go:179] * [auto-842431] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:43:38.719040  492561 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:43:38.719737  492561 notify.go:221] Checking for updates...
	I1124 03:43:38.725627  492561 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:43:38.728944  492561 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:43:38.732114  492561 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:43:38.735273  492561 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:43:38.738481  492561 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:43:38.742024  492561 config.go:182] Loaded profile config "default-k8s-diff-port-774072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:43:38.742187  492561 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:43:38.781673  492561 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:43:38.781816  492561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:43:38.856982  492561 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:43:38.846648515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:43:38.857100  492561 docker.go:319] overlay module found
	I1124 03:43:38.860261  492561 out.go:179] * Using the docker driver based on user configuration
	I1124 03:43:38.863285  492561 start.go:309] selected driver: docker
	I1124 03:43:38.863308  492561 start.go:927] validating driver "docker" against <nil>
	I1124 03:43:38.863323  492561 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:43:38.864085  492561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:43:38.927616  492561 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:43:38.917147635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:43:38.927791  492561 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:43:38.928020  492561 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:43:38.931610  492561 out.go:179] * Using Docker driver with root privileges
	I1124 03:43:38.934659  492561 cni.go:84] Creating CNI manager for ""
	I1124 03:43:38.934737  492561 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:43:38.934751  492561 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:43:38.934843  492561 start.go:353] cluster config:
	{Name:auto-842431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-842431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:con
tainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:43:38.938085  492561 out.go:179] * Starting "auto-842431" primary control-plane node in "auto-842431" cluster
	I1124 03:43:38.941137  492561 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:43:38.944132  492561 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:43:38.947208  492561 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:43:38.947244  492561 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:43:38.947262  492561 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 03:43:38.947274  492561 cache.go:65] Caching tarball of preloaded images
	I1124 03:43:38.947351  492561 preload.go:238] Found /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 03:43:38.947362  492561 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 03:43:38.947479  492561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/config.json ...
	I1124 03:43:38.947497  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/config.json: {Name:mkd95f1c431341967d7de6279832af4200a84b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:38.967198  492561 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:43:38.967248  492561 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:43:38.967276  492561 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:43:38.967316  492561 start.go:360] acquireMachinesLock for auto-842431: {Name:mk40b6975294d38f37d6a26343eed441c6c387a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:43:38.967436  492561 start.go:364] duration metric: took 96.887µs to acquireMachinesLock for "auto-842431"
	I1124 03:43:38.967467  492561 start.go:93] Provisioning new machine with config: &{Name:auto-842431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-842431 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:43:38.967544  492561 start.go:125] createHost starting for "" (driver="docker")
	W1124 03:43:36.275131  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	W1124 03:43:38.773034  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	I1124 03:43:38.970910  492561 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:43:38.971154  492561 start.go:159] libmachine.API.Create for "auto-842431" (driver="docker")
	I1124 03:43:38.971194  492561 client.go:173] LocalClient.Create starting
	I1124 03:43:38.971281  492561 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem
	I1124 03:43:38.971325  492561 main.go:143] libmachine: Decoding PEM data...
	I1124 03:43:38.971352  492561 main.go:143] libmachine: Parsing certificate...
	I1124 03:43:38.971407  492561 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem
	I1124 03:43:38.971430  492561 main.go:143] libmachine: Decoding PEM data...
	I1124 03:43:38.971445  492561 main.go:143] libmachine: Parsing certificate...
	I1124 03:43:38.971825  492561 cli_runner.go:164] Run: docker network inspect auto-842431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:43:38.989763  492561 cli_runner.go:211] docker network inspect auto-842431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:43:38.989866  492561 network_create.go:284] running [docker network inspect auto-842431] to gather additional debugging logs...
	I1124 03:43:38.989889  492561 cli_runner.go:164] Run: docker network inspect auto-842431
	W1124 03:43:39.010335  492561 cli_runner.go:211] docker network inspect auto-842431 returned with exit code 1
	I1124 03:43:39.010366  492561 network_create.go:287] error running [docker network inspect auto-842431]: docker network inspect auto-842431: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-842431 not found
	I1124 03:43:39.010382  492561 network_create.go:289] output of [docker network inspect auto-842431]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-842431 not found
	
	** /stderr **
	I1124 03:43:39.010497  492561 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:43:39.028621  492561 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-752aaa40bb3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:00:20:e4:71:15} reservation:<nil>}
	I1124 03:43:39.029001  492561 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbb0dee281db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:ff:07:3e:91:0f} reservation:<nil>}
	I1124 03:43:39.029261  492561 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d95ffec60547 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:b5:f2:ed:07:1e} reservation:<nil>}
	I1124 03:43:39.029702  492561 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c0610}
	I1124 03:43:39.029737  492561 network_create.go:124] attempt to create docker network auto-842431 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 03:43:39.029793  492561 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-842431 auto-842431
	I1124 03:43:39.090328  492561 network_create.go:108] docker network auto-842431 192.168.76.0/24 created
	I1124 03:43:39.090366  492561 kic.go:121] calculated static IP "192.168.76.2" for the "auto-842431" container
	I1124 03:43:39.090459  492561 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:43:39.105807  492561 cli_runner.go:164] Run: docker volume create auto-842431 --label name.minikube.sigs.k8s.io=auto-842431 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:43:39.123009  492561 oci.go:103] Successfully created a docker volume auto-842431
	I1124 03:43:39.123099  492561 cli_runner.go:164] Run: docker run --rm --name auto-842431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-842431 --entrypoint /usr/bin/test -v auto-842431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:43:39.684618  492561 oci.go:107] Successfully prepared a docker volume auto-842431
	I1124 03:43:39.684691  492561 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:43:39.684706  492561 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:43:39.684774  492561 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-842431:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	W1124 03:43:41.273044  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	W1124 03:43:43.772289  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	I1124 03:43:44.140613  492561 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-842431:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.455801482s)
	I1124 03:43:44.140646  492561 kic.go:203] duration metric: took 4.455937024s to extract preloaded images to volume ...
	W1124 03:43:44.140792  492561 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 03:43:44.140912  492561 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:43:44.197466  492561 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-842431 --name auto-842431 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-842431 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-842431 --network auto-842431 --ip 192.168.76.2 --volume auto-842431:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:43:44.545977  492561 cli_runner.go:164] Run: docker container inspect auto-842431 --format={{.State.Running}}
	I1124 03:43:44.571385  492561 cli_runner.go:164] Run: docker container inspect auto-842431 --format={{.State.Status}}
	I1124 03:43:44.599329  492561 cli_runner.go:164] Run: docker exec auto-842431 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:43:44.658815  492561 oci.go:144] the created container "auto-842431" has a running status.
	I1124 03:43:44.658848  492561 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa...
	I1124 03:43:44.720036  492561 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:43:44.752103  492561 cli_runner.go:164] Run: docker container inspect auto-842431 --format={{.State.Status}}
	I1124 03:43:44.777961  492561 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:43:44.777985  492561 kic_runner.go:114] Args: [docker exec --privileged auto-842431 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:43:44.836360  492561 cli_runner.go:164] Run: docker container inspect auto-842431 --format={{.State.Status}}
	I1124 03:43:44.865401  492561 machine.go:94] provisionDockerMachine start ...
	I1124 03:43:44.865495  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:44.889479  492561 main.go:143] libmachine: Using SSH client type: native
	I1124 03:43:44.889820  492561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 03:43:44.889837  492561 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:43:44.890516  492561 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35870->127.0.0.1:33463: read: connection reset by peer
	I1124 03:43:48.040700  492561 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-842431
	
	I1124 03:43:48.040730  492561 ubuntu.go:182] provisioning hostname "auto-842431"
	I1124 03:43:48.040794  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:48.058621  492561 main.go:143] libmachine: Using SSH client type: native
	I1124 03:43:48.058937  492561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 03:43:48.058948  492561 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-842431 && echo "auto-842431" | sudo tee /etc/hostname
	I1124 03:43:48.213574  492561 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-842431
	
	I1124 03:43:48.213656  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:48.235691  492561 main.go:143] libmachine: Using SSH client type: native
	I1124 03:43:48.236017  492561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 03:43:48.236041  492561 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-842431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-842431/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-842431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:43:48.384657  492561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:43:48.384687  492561 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-255205/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-255205/.minikube}
	I1124 03:43:48.384721  492561 ubuntu.go:190] setting up certificates
	I1124 03:43:48.384730  492561 provision.go:84] configureAuth start
	I1124 03:43:48.384787  492561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-842431
	I1124 03:43:48.403581  492561 provision.go:143] copyHostCerts
	I1124 03:43:48.403647  492561 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem, removing ...
	I1124 03:43:48.403657  492561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem
	I1124 03:43:48.403998  492561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem (1078 bytes)
	I1124 03:43:48.404166  492561 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem, removing ...
	I1124 03:43:48.404176  492561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem
	I1124 03:43:48.404208  492561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem (1123 bytes)
	I1124 03:43:48.404270  492561 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem, removing ...
	I1124 03:43:48.404274  492561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem
	I1124 03:43:48.404299  492561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem (1675 bytes)
	I1124 03:43:48.404355  492561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem org=jenkins.auto-842431 san=[127.0.0.1 192.168.76.2 auto-842431 localhost minikube]
	I1124 03:43:48.502552  492561 provision.go:177] copyRemoteCerts
	I1124 03:43:48.502617  492561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:43:48.502655  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:48.520407  492561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa Username:docker}
	I1124 03:43:48.627546  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 03:43:48.646702  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:43:48.665013  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:43:48.685933  492561 provision.go:87] duration metric: took 301.180507ms to configureAuth
	I1124 03:43:48.685977  492561 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:43:48.686164  492561 config.go:182] Loaded profile config "auto-842431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:43:48.686180  492561 machine.go:97] duration metric: took 3.820756529s to provisionDockerMachine
	I1124 03:43:48.686187  492561 client.go:176] duration metric: took 9.714986585s to LocalClient.Create
	I1124 03:43:48.686200  492561 start.go:167] duration metric: took 9.715047845s to libmachine.API.Create "auto-842431"
	I1124 03:43:48.686210  492561 start.go:293] postStartSetup for "auto-842431" (driver="docker")
	I1124 03:43:48.686219  492561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:43:48.686279  492561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:43:48.686322  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:48.704673  492561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa Username:docker}
	I1124 03:43:48.808344  492561 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:43:48.811847  492561 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:43:48.811886  492561 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:43:48.811898  492561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/addons for local assets ...
	I1124 03:43:48.811957  492561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/files for local assets ...
	I1124 03:43:48.812039  492561 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem -> 2570692.pem in /etc/ssl/certs
	I1124 03:43:48.812143  492561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:43:48.820101  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:43:48.839925  492561 start.go:296] duration metric: took 153.699732ms for postStartSetup
	I1124 03:43:48.840315  492561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-842431
	I1124 03:43:48.857571  492561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/config.json ...
	I1124 03:43:48.857867  492561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:43:48.857916  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:48.876454  492561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa Username:docker}
	I1124 03:43:48.977861  492561 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:43:48.982949  492561 start.go:128] duration metric: took 10.015389692s to createHost
	I1124 03:43:48.982976  492561 start.go:83] releasing machines lock for "auto-842431", held for 10.015526333s
	I1124 03:43:48.983051  492561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-842431
	I1124 03:43:49.002420  492561 ssh_runner.go:195] Run: cat /version.json
	I1124 03:43:49.002483  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:49.002833  492561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:43:49.002916  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:49.024010  492561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa Username:docker}
	I1124 03:43:49.025436  492561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa Username:docker}
	I1124 03:43:49.215347  492561 ssh_runner.go:195] Run: systemctl --version
	I1124 03:43:49.228660  492561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:43:49.233682  492561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:43:49.233763  492561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:43:49.260679  492561 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 03:43:49.260709  492561 start.go:496] detecting cgroup driver to use...
	I1124 03:43:49.260742  492561 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 03:43:49.260791  492561 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:43:49.276262  492561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:43:49.289612  492561 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:43:49.289697  492561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:43:49.307188  492561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:43:49.327639  492561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:43:49.456880  492561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:43:49.587435  492561 docker.go:234] disabling docker service ...
	I1124 03:43:49.587534  492561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:43:49.613023  492561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:43:49.632840  492561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:43:49.763313  492561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:43:49.884560  492561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:43:49.899497  492561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:43:49.914190  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:43:49.924860  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:43:49.935113  492561 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 03:43:49.935212  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 03:43:49.945284  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:43:49.954733  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:43:49.964699  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:43:49.974511  492561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:43:49.983768  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:43:49.993713  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:43:50.004742  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
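	The run of sed commands above rewrites /etc/containerd/config.toml in place: sandbox image, restrict_oom_score_adj, SystemdCgroup, runtime type, CNI conf_dir, and unprivileged ports. As a minimal illustration only (not minikube source code; the config fragment below is made up), the SystemdCgroup rewrite could be expressed with Go's regexp package instead of sed:

	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		// Illustrative config fragment; a real /etc/containerd/config.toml is much larger.
		conf := "[some.containerd.runtime.options]\n  SystemdCgroup = true\n"

		// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}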
	I1124 03:43:50.018043  492561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:43:50.027253  492561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:43:50.036132  492561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:43:50.185883  492561 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:43:50.338124  492561 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:43:50.338271  492561 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:43:50.345116  492561 start.go:564] Will wait 60s for crictl version
	I1124 03:43:50.345305  492561 ssh_runner.go:195] Run: which crictl
	I1124 03:43:50.350011  492561 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:43:50.382640  492561 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:43:50.382780  492561 ssh_runner.go:195] Run: containerd --version
	I1124 03:43:50.403186  492561 ssh_runner.go:195] Run: containerd --version
	I1124 03:43:50.442249  492561 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
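	After restarting containerd, the log above waits up to 60s for /run/containerd/containerd.sock before probing crictl. A minimal sketch of that kind of socket-path polling (hypothetical code, not the minikube implementation):

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a filesystem path until it exists or the timeout expires.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file is present
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("containerd socket is ready")
	}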
	W1124 03:43:46.272849  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	W1124 03:43:48.772258  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	I1124 03:43:50.272818  482662 node_ready.go:49] node "default-k8s-diff-port-774072" is "Ready"
	I1124 03:43:50.272846  482662 node_ready.go:38] duration metric: took 40.003383552s for node "default-k8s-diff-port-774072" to be "Ready" ...
	I1124 03:43:50.272860  482662 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:43:50.272916  482662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:43:50.294447  482662 api_server.go:72] duration metric: took 41.682124385s to wait for apiserver process to appear ...
	I1124 03:43:50.294472  482662 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:43:50.294492  482662 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 03:43:50.304851  482662 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 03:43:50.306160  482662 api_server.go:141] control plane version: v1.34.1
	I1124 03:43:50.306183  482662 api_server.go:131] duration metric: took 11.704728ms to wait for apiserver health ...
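	The healthz wait shown above is an HTTPS probe that succeeds once the endpoint returns 200 with the body "ok". A self-contained sketch under that assumption (certificate verification is skipped here purely to keep the example short; it is not how a production client should talk to the apiserver):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// healthz returns nil only if the endpoint answers 200 with body "ok".
	func healthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK || string(body) != "ok" {
			return fmt.Errorf("unhealthy: %d %q", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := healthz("https://192.168.85.2:8444/healthz"); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}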
	I1124 03:43:50.306192  482662 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:43:50.310165  482662 system_pods.go:59] 8 kube-system pods found
	I1124 03:43:50.310196  482662 system_pods.go:61] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:50.310203  482662 system_pods.go:61] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:50.310208  482662 system_pods.go:61] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:50.310212  482662 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:50.310217  482662 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:50.310221  482662 system_pods.go:61] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:50.310224  482662 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:50.310231  482662 system_pods.go:61] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:50.310244  482662 system_pods.go:74] duration metric: took 4.047109ms to wait for pod list to return data ...
	I1124 03:43:50.310252  482662 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:43:50.313210  482662 default_sa.go:45] found service account: "default"
	I1124 03:43:50.313230  482662 default_sa.go:55] duration metric: took 2.972607ms for default service account to be created ...
	I1124 03:43:50.313239  482662 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:43:50.317249  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:50.317329  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:50.317355  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:50.317393  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:50.317419  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:50.317439  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:50.317477  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:50.317502  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:50.317524  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:50.317575  482662 retry.go:31] will retry after 204.854857ms: missing components: kube-dns
	I1124 03:43:50.534344  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:50.534377  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:50.534384  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:50.534393  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:50.534398  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:50.534402  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:50.534406  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:50.534410  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:50.534416  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:50.534431  482662 retry.go:31] will retry after 303.947041ms: missing components: kube-dns
	I1124 03:43:50.843331  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:50.843362  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:50.843368  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:50.843375  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:50.843379  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:50.843384  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:50.843388  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:50.843392  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:50.843401  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:50.843417  482662 retry.go:31] will retry after 479.793876ms: missing components: kube-dns
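	The retry.go lines above poll the kube-system pod list with increasing, jittered delays until kube-dns stops being the missing component. A generic sketch of such a retry loop (illustrative only; the backoff schedule below is invented, not minikube's):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil re-runs check with a growing, jittered delay until it succeeds
	// or the overall timeout is exceeded.
	func retryUntil(timeout time.Duration, check func() error) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for {
			err := check()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("gave up: %w", err)
			}
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2
		}
	}

	func main() {
		attempts := 0
		_ = retryUntil(10*time.Second, func() error {
			attempts++
			if attempts < 3 {
				return errors.New("missing components: kube-dns")
			}
			return nil
		})
	}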
	I1124 03:43:50.445273  492561 cli_runner.go:164] Run: docker network inspect auto-842431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:43:50.462029  492561 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:43:50.466606  492561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:43:50.477193  492561 kubeadm.go:884] updating cluster {Name:auto-842431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-842431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:43:50.477315  492561 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:43:50.477389  492561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:43:50.501134  492561 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:43:50.501164  492561 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:43:50.501222  492561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:43:50.543116  492561 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:43:50.543195  492561 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:43:50.543218  492561 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 03:43:50.543357  492561 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-842431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-842431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:43:50.543461  492561 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:43:50.584882  492561 cni.go:84] Creating CNI manager for ""
	I1124 03:43:50.584902  492561 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:43:50.584923  492561 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:43:50.584955  492561 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-842431 NodeName:auto-842431 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:43:50.585075  492561 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-842431"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:43:50.585148  492561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:43:50.596696  492561 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:43:50.596818  492561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:43:50.608916  492561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1124 03:43:50.631598  492561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:43:50.647688  492561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 03:43:50.668913  492561 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:43:50.678499  492561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:43:50.694765  492561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:43:50.871837  492561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:43:50.892090  492561 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431 for IP: 192.168.76.2
	I1124 03:43:50.892160  492561 certs.go:195] generating shared ca certs ...
	I1124 03:43:50.892190  492561 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:50.892393  492561 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:43:50.892506  492561 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:43:50.892557  492561 certs.go:257] generating profile certs ...
	I1124 03:43:50.892645  492561 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.key
	I1124 03:43:50.892678  492561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt with IP's: []
	I1124 03:43:51.221611  492561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt ...
	I1124 03:43:51.221688  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: {Name:mkbfb3b11fa96a1355b7693402c3d99e9c6c04f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.221933  492561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.key ...
	I1124 03:43:51.221968  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.key: {Name:mk10b89b1e440af1917b232f886f66b7dc5d07a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.222114  492561 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key.bd98c337
	I1124 03:43:51.222152  492561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt.bd98c337 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:43:51.737672  492561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt.bd98c337 ...
	I1124 03:43:51.737708  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt.bd98c337: {Name:mk4bcc8ea38ebd3491e24f1f2d94b9d49900983a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.737897  492561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key.bd98c337 ...
	I1124 03:43:51.737912  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key.bd98c337: {Name:mka6b3b7ed566d487f1c7e4e27f303eab953a5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.738001  492561 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt.bd98c337 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt
	I1124 03:43:51.738092  492561 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key.bd98c337 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key
	I1124 03:43:51.738161  492561 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.key
	I1124 03:43:51.738178  492561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.crt with IP's: []
	I1124 03:43:51.804700  492561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.crt ...
	I1124 03:43:51.804733  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.crt: {Name:mk2e7f2853b3a0e922f6664a73c7b2940788847c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.804950  492561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.key ...
	I1124 03:43:51.804965  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.key: {Name:mkc5dc681d2349188ebb45b3500af314e4e7bb5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
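	certs.go above generates the profile client, apiserver, and aggregator certificates, with the apiserver certificate carrying the IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.76.2. The rough sketch below builds a certificate with the same SANs but self-signs it, whereas minikube signs against its own CA; the subject and validity here are illustrative only:

	package main

	import (
		"crypto/ecdsa"
		"crypto/elliptic"
		"crypto/rand"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // mirrors the CertExpiration value in the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		// Self-signed: the template acts as its own parent.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}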
	I1124 03:43:51.805177  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:43:51.805227  492561 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:43:51.805240  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:43:51.805269  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:43:51.805300  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:43:51.805328  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:43:51.805388  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:43:51.805973  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:43:51.824305  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:43:51.843804  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:43:51.861226  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:43:51.878732  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 03:43:51.912490  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:43:51.954531  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:43:51.988758  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:43:52.015270  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:43:52.037001  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:43:52.058103  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:43:52.079134  492561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:43:52.094176  492561 ssh_runner.go:195] Run: openssl version
	I1124 03:43:52.100777  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:43:52.110095  492561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:43:52.114091  492561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:43:52.114154  492561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:43:52.155571  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
	I1124 03:43:52.164152  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:43:52.173075  492561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:43:52.178330  492561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:43:52.178432  492561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:43:52.240601  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:43:52.250066  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:43:52.259257  492561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:43:52.263359  492561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:43:52.263428  492561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:43:52.307589  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:43:52.316342  492561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:43:52.320150  492561 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:43:52.320205  492561 kubeadm.go:401] StartCluster: {Name:auto-842431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-842431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:43:52.320297  492561 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:43:52.320357  492561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:43:52.359372  492561 cri.go:89] found id: ""
	I1124 03:43:52.359486  492561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:43:52.371930  492561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:43:52.382881  492561 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:43:52.382995  492561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:43:52.397046  492561 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:43:52.397122  492561 kubeadm.go:158] found existing configuration files:
	
	I1124 03:43:52.397208  492561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:43:52.410297  492561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:43:52.410404  492561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:43:52.418900  492561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:43:52.426900  492561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:43:52.426965  492561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:43:52.435169  492561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:43:52.444090  492561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:43:52.444212  492561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:43:52.452248  492561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:43:52.460115  492561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:43:52.460233  492561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:43:52.468356  492561 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:43:52.536295  492561 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:43:52.536643  492561 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:43:52.617094  492561 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:43:51.329080  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:51.329113  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:51.329120  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:51.329126  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:51.329130  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:51.329134  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:51.329138  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:51.329142  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:51.329147  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:51.329173  482662 retry.go:31] will retry after 434.091686ms: missing components: kube-dns
	I1124 03:43:51.772208  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:51.772248  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:51.772256  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:51.772279  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:51.772284  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:51.772288  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:51.772292  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:51.772296  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:51.772301  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:51.772316  482662 retry.go:31] will retry after 571.716917ms: missing components: kube-dns
	I1124 03:43:52.349169  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:52.349198  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Running
	I1124 03:43:52.349205  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:52.349211  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:52.349215  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:52.349220  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:52.349223  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:52.349228  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:52.349232  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Running
	I1124 03:43:52.349240  482662 system_pods.go:126] duration metric: took 2.03599487s to wait for k8s-apps to be running ...
	I1124 03:43:52.349246  482662 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:43:52.349300  482662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:43:52.364328  482662 system_svc.go:56] duration metric: took 15.071767ms WaitForService to wait for kubelet
	I1124 03:43:52.364355  482662 kubeadm.go:587] duration metric: took 43.752037263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:43:52.364373  482662 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:43:52.368734  482662 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:43:52.368764  482662 node_conditions.go:123] node cpu capacity is 2
	I1124 03:43:52.368777  482662 node_conditions.go:105] duration metric: took 4.399677ms to run NodePressure ...
	I1124 03:43:52.368791  482662 start.go:242] waiting for startup goroutines ...
	I1124 03:43:52.368798  482662 start.go:247] waiting for cluster config update ...
	I1124 03:43:52.368809  482662 start.go:256] writing updated cluster config ...
	I1124 03:43:52.369096  482662 ssh_runner.go:195] Run: rm -f paused
	I1124 03:43:52.374616  482662 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:43:52.378601  482662 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jgtk7" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.385966  482662 pod_ready.go:94] pod "coredns-66bc5c9577-jgtk7" is "Ready"
	I1124 03:43:52.386042  482662 pod_ready.go:86] duration metric: took 7.419817ms for pod "coredns-66bc5c9577-jgtk7" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.388928  482662 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.396396  482662 pod_ready.go:94] pod "etcd-default-k8s-diff-port-774072" is "Ready"
	I1124 03:43:52.396490  482662 pod_ready.go:86] duration metric: took 7.491489ms for pod "etcd-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.400005  482662 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.405564  482662 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-774072" is "Ready"
	I1124 03:43:52.405639  482662 pod_ready.go:86] duration metric: took 5.604387ms for pod "kube-apiserver-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.409289  482662 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.779984  482662 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-774072" is "Ready"
	I1124 03:43:52.780014  482662 pod_ready.go:86] duration metric: took 370.652265ms for pod "kube-controller-manager-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.980908  482662 pod_ready.go:83] waiting for pod "kube-proxy-27m9s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:53.380832  482662 pod_ready.go:94] pod "kube-proxy-27m9s" is "Ready"
	I1124 03:43:53.380860  482662 pod_ready.go:86] duration metric: took 399.927379ms for pod "kube-proxy-27m9s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:53.580314  482662 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:53.979496  482662 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-774072" is "Ready"
	I1124 03:43:53.979572  482662 pod_ready.go:86] duration metric: took 399.230209ms for pod "kube-scheduler-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:53.979609  482662 pod_ready.go:40] duration metric: took 1.60496335s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:43:54.063525  482662 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:43:54.067108  482662 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-774072" cluster and "default" namespace by default
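	The pod_ready.go waits above amount to checking each kube-system pod's Ready condition before declaring the cluster done. A small client-go sketch of that condition check (hypothetical helper, assuming a kubeconfig at the default path and the k8s.io/client-go module):

	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the pod's Ready condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-jgtk7", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("Ready:", isReady(pod))
	}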
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	56f984231e308       1611cd07b61d5       8 seconds ago        Running             busybox                   0                   f2eb65427376f       busybox                                                default
	00e39fe2c36ce       138784d87c9c5       15 seconds ago       Running             coredns                   0                   a521b3af92233       coredns-66bc5c9577-jgtk7                               kube-system
	3aedae55cebdd       ba04bb24b9575       15 seconds ago       Running             storage-provisioner       0                   d5714bd29c09f       storage-provisioner                                    kube-system
	a0a1945067a8f       05baa95f5142d       55 seconds ago       Running             kube-proxy                0                   26be55cf0f473       kube-proxy-27m9s                                       kube-system
	dc1ff819cdf52       b1a8c6f707935       56 seconds ago       Running             kindnet-cni               0                   a1df1ff74e137       kindnet-2prqp                                          kube-system
	e9f0e2a57bbbc       a1894772a478e       About a minute ago   Running             etcd                      0                   89a5cc6d3af5a       etcd-default-k8s-diff-port-774072                      kube-system
	52a07b03fab83       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   be9e9c09873ed       kube-apiserver-default-k8s-diff-port-774072            kube-system
	ba331bec1dcb5       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   77f739542d214       kube-scheduler-default-k8s-diff-port-774072            kube-system
	bd316fbcb0bdb       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   366e108c422cc       kube-controller-manager-default-k8s-diff-port-774072   kube-system
	
	
	==> containerd <==
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.743841732Z" level=info msg="CreateContainer within sandbox \"d5714bd29c09f292dec1088cdb6f274865a92047b0ef73f81bbf3ec14e678fac\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"3aedae55cebdd5950afd18ab88e0fc52c1e0f20f2ea0d2b19e5d4e1fdc8cd0d5\""
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.744966384Z" level=info msg="StartContainer for \"3aedae55cebdd5950afd18ab88e0fc52c1e0f20f2ea0d2b19e5d4e1fdc8cd0d5\""
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.746792710Z" level=info msg="connecting to shim 3aedae55cebdd5950afd18ab88e0fc52c1e0f20f2ea0d2b19e5d4e1fdc8cd0d5" address="unix:///run/containerd/s/1d4e6b00a22d8603ea8830cf7c22e0c059b4606c5a061f41ce0704d75d309a92" protocol=ttrpc version=3
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.760780818Z" level=info msg="Container 00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.781991031Z" level=info msg="CreateContainer within sandbox \"a521b3af92233747f693570dda8b3453286615fcc9cf60488afa868c9389cb01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65\""
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.787261319Z" level=info msg="StartContainer for \"00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65\""
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.792709118Z" level=info msg="connecting to shim 00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65" address="unix:///run/containerd/s/07f043158c6587a75d849ba277381802749153ec92053492d6aa582a8f009099" protocol=ttrpc version=3
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.963186631Z" level=info msg="StartContainer for \"3aedae55cebdd5950afd18ab88e0fc52c1e0f20f2ea0d2b19e5d4e1fdc8cd0d5\" returns successfully"
	Nov 24 03:43:51 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:51.050302245Z" level=info msg="StartContainer for \"00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65\" returns successfully"
	Nov 24 03:43:54 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:54.666488277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:362be5db-8e55-42d3-af79-d334755f6b33,Namespace:default,Attempt:0,}"
	Nov 24 03:43:54 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:54.752408709Z" level=info msg="connecting to shim f2eb65427376f46f3c4b42ee8e956f570dab41e978d79490f7db2c4cdda76dbc" address="unix:///run/containerd/s/917b63682aeac47978163afc8198993b032393ce7f3fa36e23c01e0492e45405" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:43:54 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:54.884975934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:362be5db-8e55-42d3-af79-d334755f6b33,Namespace:default,Attempt:0,} returns sandbox id \"f2eb65427376f46f3c4b42ee8e956f570dab41e978d79490f7db2c4cdda76dbc\""
	Nov 24 03:43:54 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:54.890794944Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.252276188Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.254321388Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937187"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.256784338Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.260701879Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.261933821Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.370953941s"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.262262784Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.275817773Z" level=info msg="CreateContainer within sandbox \"f2eb65427376f46f3c4b42ee8e956f570dab41e978d79490f7db2c4cdda76dbc\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.290384429Z" level=info msg="Container 56f984231e3084caf3d8e535f57d381e60acfbd8638a14f79f050f8809017a26: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.301020662Z" level=info msg="CreateContainer within sandbox \"f2eb65427376f46f3c4b42ee8e956f570dab41e978d79490f7db2c4cdda76dbc\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"56f984231e3084caf3d8e535f57d381e60acfbd8638a14f79f050f8809017a26\""
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.304634430Z" level=info msg="StartContainer for \"56f984231e3084caf3d8e535f57d381e60acfbd8638a14f79f050f8809017a26\""
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.307364492Z" level=info msg="connecting to shim 56f984231e3084caf3d8e535f57d381e60acfbd8638a14f79f050f8809017a26" address="unix:///run/containerd/s/917b63682aeac47978163afc8198993b032393ce7f3fa36e23c01e0492e45405" protocol=ttrpc version=3
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.408995736Z" level=info msg="StartContainer for \"56f984231e3084caf3d8e535f57d381e60acfbd8638a14f79f050f8809017a26\" returns successfully"
	
	
	==> coredns [00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53503 - 55369 "HINFO IN 8793316064930919524.7051367551294291303. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031857408s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-774072
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-774072
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=default-k8s-diff-port-774072
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_43_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:42:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-774072
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:44:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:44:05 +0000   Mon, 24 Nov 2025 03:42:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:44:05 +0000   Mon, 24 Nov 2025 03:42:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:44:05 +0000   Mon, 24 Nov 2025 03:42:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:44:05 +0000   Mon, 24 Nov 2025 03:43:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-774072
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                b34292c3-f00c-4314-8f46-89239011216f
	  Boot ID:                    63a8a852-1462-44b1-9d6f-f77d26e8568f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-jgtk7                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     58s
	  kube-system                 etcd-default-k8s-diff-port-774072                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         66s
	  kube-system                 kindnet-2prqp                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-default-k8s-diff-port-774072             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-774072    200m (10%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-proxy-27m9s                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-default-k8s-diff-port-774072             100m (5%)     0 (0%)      0 (0%)           0 (0%)         64s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeAllocatableEnforced  75s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 75s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  75s (x8 over 75s)  kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    75s (x8 over 75s)  kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     75s (x7 over 75s)  kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasSufficientPID
	  Normal   Starting                 75s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           59s                node-controller  Node default-k8s-diff-port-774072 event: Registered Node default-k8s-diff-port-774072 in Controller
	  Normal   NodeReady                16s                kubelet          Node default-k8s-diff-port-774072 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:27] overlayfs: idmapped layers are currently not supported
	[Nov24 02:28] overlayfs: idmapped layers are currently not supported
	[Nov24 02:30] overlayfs: idmapped layers are currently not supported
	[  +9.824160] overlayfs: idmapped layers are currently not supported
	[Nov24 02:31] overlayfs: idmapped layers are currently not supported
	[Nov24 02:32] overlayfs: idmapped layers are currently not supported
	[ +27.981383] overlayfs: idmapped layers are currently not supported
	[Nov24 02:33] overlayfs: idmapped layers are currently not supported
	[Nov24 02:34] overlayfs: idmapped layers are currently not supported
	[Nov24 02:35] overlayfs: idmapped layers are currently not supported
	[Nov24 02:36] overlayfs: idmapped layers are currently not supported
	[Nov24 02:37] overlayfs: idmapped layers are currently not supported
	[Nov24 02:38] overlayfs: idmapped layers are currently not supported
	[Nov24 02:39] overlayfs: idmapped layers are currently not supported
	[ +24.837346] overlayfs: idmapped layers are currently not supported
	[Nov24 02:40] overlayfs: idmapped layers are currently not supported
	[ +40.823948] overlayfs: idmapped layers are currently not supported
	[  +1.705989] overlayfs: idmapped layers are currently not supported
	[Nov24 02:42] overlayfs: idmapped layers are currently not supported
	[ +21.661904] overlayfs: idmapped layers are currently not supported
	[Nov24 02:44] overlayfs: idmapped layers are currently not supported
	[  +1.074777] overlayfs: idmapped layers are currently not supported
	[Nov24 02:46] overlayfs: idmapped layers are currently not supported
	[ +19.120392] overlayfs: idmapped layers are currently not supported
	[Nov24 02:48] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [e9f0e2a57bbbcfee223e5d178a062cb80bd6225e0085b14ac5aad3d60b31cd5b] <==
	{"level":"warn","ts":"2025-11-24T03:42:57.353674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.419671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.434775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.508920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.526830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.563981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.586751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.633885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.714856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.726991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.761385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.788901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.821626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.854213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.900172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.934615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.961185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.998910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.015491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.044567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.076248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.093025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.122439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.140857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.280812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49720","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:44:06 up  2:26,  0 user,  load average: 5.81, 4.41, 3.40
	Linux default-k8s-diff-port-774072 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dc1ff819cdf5287b5b8789ac09225c931ea261a4c3ac9d97c8ebffbd5e511c42] <==
	I1124 03:43:09.826449       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:43:09.827945       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:43:09.828104       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:43:09.828117       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:43:09.828132       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:43:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:43:10.035204       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:43:10.035226       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:43:10.035234       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:43:10.035554       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 03:43:40.039763       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 03:43:40.039932       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 03:43:40.040010       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 03:43:40.040090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1124 03:43:41.736228       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:43:41.736266       1 metrics.go:72] Registering metrics
	I1124 03:43:41.736334       1 controller.go:711] "Syncing nftables rules"
	I1124 03:43:50.037082       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:43:50.037136       1 main.go:301] handling current node
	I1124 03:44:00.036867       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:44:00.036916       1 main.go:301] handling current node
	
	
	==> kube-apiserver [52a07b03fab83f0354b99cda2a352a848f5d743749226cd71a933518934bfc74] <==
	I1124 03:42:59.672116       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1124 03:42:59.690788       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1124 03:42:59.695640       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:42:59.755468       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:42:59.756353       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:42:59.769579       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:42:59.898868       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:43:00.118776       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:43:00.132790       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:43:00.133954       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:43:01.698758       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:43:01.778227       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:43:01.859635       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:43:01.871106       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:43:01.872628       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:43:01.878264       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:43:02.459070       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:43:03.081760       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:43:03.107001       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:43:03.122860       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:43:07.806068       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:43:08.209243       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:43:08.217187       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:43:08.626944       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 03:44:04.585382       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:54964: use of closed network connection
	
	
	==> kube-controller-manager [bd316fbcb0bdb1412b6625831cdf21ae957836aeb36b4dfb316548f954754911] <==
	I1124 03:43:07.602459       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 03:43:07.602741       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 03:43:07.603243       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:43:07.592743       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:43:07.609702       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:43:07.611670       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:43:07.620963       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:43:07.622154       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 03:43:07.627180       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:43:07.636309       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-774072" podCIDRs=["10.244.0.0/24"]
	I1124 03:43:07.637769       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:43:07.650004       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:43:07.650313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 03:43:07.650539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:43:07.650605       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:43:07.650668       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:43:07.650676       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:43:07.650681       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:43:07.650752       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:43:07.651345       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:43:07.651925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:43:07.652004       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:43:07.652034       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:43:07.652255       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:43:52.608336       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a0a1945067a8f9db6c41a9276bcaf5758a935d836a628957d0257595ad0648f6] <==
	I1124 03:43:10.518396       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:43:10.598365       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:43:10.699367       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:43:10.699411       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:43:10.699524       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:43:10.725223       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:43:10.725297       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:43:10.730350       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:43:10.730807       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:43:10.730833       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:43:10.733874       1 config.go:200] "Starting service config controller"
	I1124 03:43:10.733900       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:43:10.733962       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:43:10.733977       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:43:10.734000       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:43:10.734032       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:43:10.738138       1 config.go:309] "Starting node config controller"
	I1124 03:43:10.738175       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:43:10.738184       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:43:10.834127       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:43:10.834142       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:43:10.834180       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ba331bec1dcb5982c85300f5dc3a0d66515f5e944f0be718440710fb98498763] <==
	E1124 03:42:59.837441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:42:59.837507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:42:59.837571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:42:59.837638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:42:59.837692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:42:59.837759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:42:59.837828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:42:59.837977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:42:59.838046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:42:59.838172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:42:59.838241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:42:59.838298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:42:59.838390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:43:00.669568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:43:00.671757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:43:00.774376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:43:00.830022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:43:00.874430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:43:00.938218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:43:00.998388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:43:01.039727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:43:01.043646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 03:43:01.094568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:43:01.228878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1124 03:43:02.978659       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:43:04 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:04.623522    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-774072" podStartSLOduration=2.62346106 podStartE2EDuration="2.62346106s" podCreationTimestamp="2025-11-24 03:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:43:04.585232515 +0000 UTC m=+1.544026592" watchObservedRunningTime="2025-11-24 03:43:04.62346106 +0000 UTC m=+1.582255121"
	Nov 24 03:43:04 default-k8s-diff-port-774072 kubelet[1454]: E1124 03:43:04.647742    1454 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-774072\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-774072"
	Nov 24 03:43:07 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:07.682284    1454 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:43:07 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:07.683415    1454 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.792729    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c770b4cd-7775-4aac-aa0a-1fa63016eb77-cni-cfg\") pod \"kindnet-2prqp\" (UID: \"c770b4cd-7775-4aac-aa0a-1fa63016eb77\") " pod="kube-system/kindnet-2prqp"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.792771    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c770b4cd-7775-4aac-aa0a-1fa63016eb77-lib-modules\") pod \"kindnet-2prqp\" (UID: \"c770b4cd-7775-4aac-aa0a-1fa63016eb77\") " pod="kube-system/kindnet-2prqp"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.792814    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqwdv\" (UniqueName: \"kubernetes.io/projected/c770b4cd-7775-4aac-aa0a-1fa63016eb77-kube-api-access-mqwdv\") pod \"kindnet-2prqp\" (UID: \"c770b4cd-7775-4aac-aa0a-1fa63016eb77\") " pod="kube-system/kindnet-2prqp"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.792837    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c770b4cd-7775-4aac-aa0a-1fa63016eb77-xtables-lock\") pod \"kindnet-2prqp\" (UID: \"c770b4cd-7775-4aac-aa0a-1fa63016eb77\") " pod="kube-system/kindnet-2prqp"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: E1124 03:43:08.796909    1454 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:default-k8s-diff-port-774072\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-774072' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.900806    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/87f0e4dc-0625-4dc4-b724-459a8547efb5-kube-proxy\") pod \"kube-proxy-27m9s\" (UID: \"87f0e4dc-0625-4dc4-b724-459a8547efb5\") " pod="kube-system/kube-proxy-27m9s"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.900854    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9mj\" (UniqueName: \"kubernetes.io/projected/87f0e4dc-0625-4dc4-b724-459a8547efb5-kube-api-access-4f9mj\") pod \"kube-proxy-27m9s\" (UID: \"87f0e4dc-0625-4dc4-b724-459a8547efb5\") " pod="kube-system/kube-proxy-27m9s"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.900922    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87f0e4dc-0625-4dc4-b724-459a8547efb5-lib-modules\") pod \"kube-proxy-27m9s\" (UID: \"87f0e4dc-0625-4dc4-b724-459a8547efb5\") " pod="kube-system/kube-proxy-27m9s"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.900940    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87f0e4dc-0625-4dc4-b724-459a8547efb5-xtables-lock\") pod \"kube-proxy-27m9s\" (UID: \"87f0e4dc-0625-4dc4-b724-459a8547efb5\") " pod="kube-system/kube-proxy-27m9s"
	Nov 24 03:43:09 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:09.023416    1454 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 03:43:10 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:10.681563    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27m9s" podStartSLOduration=2.681546414 podStartE2EDuration="2.681546414s" podCreationTimestamp="2025-11-24 03:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:43:10.677195049 +0000 UTC m=+7.635989159" watchObservedRunningTime="2025-11-24 03:43:10.681546414 +0000 UTC m=+7.640340475"
	Nov 24 03:43:11 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:11.689499    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2prqp" podStartSLOduration=3.689477365 podStartE2EDuration="3.689477365s" podCreationTimestamp="2025-11-24 03:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:43:10.69580657 +0000 UTC m=+7.654600623" watchObservedRunningTime="2025-11-24 03:43:11.689477365 +0000 UTC m=+8.648271426"
	Nov 24 03:43:50 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:50.142158    1454 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:43:50 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:50.220169    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d4a7d3a-e840-4348-a3ce-f56234bb94c3-tmp\") pod \"storage-provisioner\" (UID: \"2d4a7d3a-e840-4348-a3ce-f56234bb94c3\") " pod="kube-system/storage-provisioner"
	Nov 24 03:43:50 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:50.220336    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4ls\" (UniqueName: \"kubernetes.io/projected/2d4a7d3a-e840-4348-a3ce-f56234bb94c3-kube-api-access-bf4ls\") pod \"storage-provisioner\" (UID: \"2d4a7d3a-e840-4348-a3ce-f56234bb94c3\") " pod="kube-system/storage-provisioner"
	Nov 24 03:43:50 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:50.321034    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bjw8\" (UniqueName: \"kubernetes.io/projected/7dea22e8-aa22-44cd-99fc-82662424e440-kube-api-access-7bjw8\") pod \"coredns-66bc5c9577-jgtk7\" (UID: \"7dea22e8-aa22-44cd-99fc-82662424e440\") " pod="kube-system/coredns-66bc5c9577-jgtk7"
	Nov 24 03:43:50 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:50.321312    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dea22e8-aa22-44cd-99fc-82662424e440-config-volume\") pod \"coredns-66bc5c9577-jgtk7\" (UID: \"7dea22e8-aa22-44cd-99fc-82662424e440\") " pod="kube-system/coredns-66bc5c9577-jgtk7"
	Nov 24 03:43:51 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:51.941314    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.941295047 podStartE2EDuration="41.941295047s" podCreationTimestamp="2025-11-24 03:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:43:51.941053273 +0000 UTC m=+48.899847334" watchObservedRunningTime="2025-11-24 03:43:51.941295047 +0000 UTC m=+48.900089100"
	Nov 24 03:43:51 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:51.941449    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jgtk7" podStartSLOduration=43.941441485 podStartE2EDuration="43.941441485s" podCreationTimestamp="2025-11-24 03:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:43:51.871328022 +0000 UTC m=+48.830122099" watchObservedRunningTime="2025-11-24 03:43:51.941441485 +0000 UTC m=+48.900235546"
	Nov 24 03:43:54 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:54.376573    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tw4\" (UniqueName: \"kubernetes.io/projected/362be5db-8e55-42d3-af79-d334755f6b33-kube-api-access-74tw4\") pod \"busybox\" (UID: \"362be5db-8e55-42d3-af79-d334755f6b33\") " pod="default/busybox"
	Nov 24 03:43:57 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:57.823050    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.44810191 podStartE2EDuration="3.823022524s" podCreationTimestamp="2025-11-24 03:43:54 +0000 UTC" firstStartedPulling="2025-11-24 03:43:54.889742473 +0000 UTC m=+51.848536534" lastFinishedPulling="2025-11-24 03:43:57.264663096 +0000 UTC m=+54.223457148" observedRunningTime="2025-11-24 03:43:57.822342322 +0000 UTC m=+54.781136383" watchObservedRunningTime="2025-11-24 03:43:57.823022524 +0000 UTC m=+54.781816576"
	
	
	==> storage-provisioner [3aedae55cebdd5950afd18ab88e0fc52c1e0f20f2ea0d2b19e5d4e1fdc8cd0d5] <==
	I1124 03:43:50.964370       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 03:43:50.997994       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 03:43:50.998606       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 03:43:51.003007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:51.012203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:43:51.013174       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:43:51.016095       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-774072_0db07085-7e4b-4b72-b965-36b9f8c3eb11!
	I1124 03:43:51.024931       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3251c2da-243b-4662-8625-678dc3c80640", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-774072_0db07085-7e4b-4b72-b965-36b9f8c3eb11 became leader
	W1124 03:43:51.026031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:51.039740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:43:51.121702       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-774072_0db07085-7e4b-4b72-b965-36b9f8c3eb11!
	W1124 03:43:53.043622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:53.052567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:55.057028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:55.063272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:57.067294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:57.073187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:59.076218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:59.082363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:01.087560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:01.097571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:03.103470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:03.111050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:05.114844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:05.134316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-774072 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-774072
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-774072:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054",
	        "Created": "2025-11-24T03:42:32.790872915Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 484005,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T03:42:32.852301993Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:fbb44bc62521f331457dff002aaa5e1e27856f9e53853b3b3ee62969be454028",
	        "ResolvConfPath": "/var/lib/docker/containers/83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054/hostname",
	        "HostsPath": "/var/lib/docker/containers/83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054/hosts",
	        "LogPath": "/var/lib/docker/containers/83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054/83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054-json.log",
	        "Name": "/default-k8s-diff-port-774072",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-774072:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-774072",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "83cef67c2972bc25768742627bd769418bc44b7c9617dabd32fa943949fa0054",
	                "LowerDir": "/var/lib/docker/overlay2/54edbf09208c823b3c396bb17695a59fe8d58636538a81bc10631fa6bc4a3d32-init/diff:/var/lib/docker/overlay2/11b197f530f0d571f61892814d8d4c774f7d3e5a97abdd8c5aa182cc99b2d856/diff",
	                "MergedDir": "/var/lib/docker/overlay2/54edbf09208c823b3c396bb17695a59fe8d58636538a81bc10631fa6bc4a3d32/merged",
	                "UpperDir": "/var/lib/docker/overlay2/54edbf09208c823b3c396bb17695a59fe8d58636538a81bc10631fa6bc4a3d32/diff",
	                "WorkDir": "/var/lib/docker/overlay2/54edbf09208c823b3c396bb17695a59fe8d58636538a81bc10631fa6bc4a3d32/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-774072",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-774072/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-774072",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-774072",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-774072",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "edbc99db19d9ce34cdd7f8b3f10d059f72be5d332852c6323b29f8a4b1c22907",
	            "SandboxKey": "/var/run/docker/netns/edbc99db19d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-774072": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b2:aa:8b:6a:90:40",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "35922bea1d4340d491a486d80047f4b47e62915896a7161451ad82fc12397c15",
	                    "EndpointID": "81abac5145baa1986e4eaaec05cf0701c0a48f44911c2796584b55b708986cf0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-774072",
	                        "83cef67c2972"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
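The "Ports" map in the inspect output above shows how each exposed container port (22, 2376, 5000, 8444 and 32443/tcp) is published to an ephemeral host port on 127.0.0.1; 8444/tcp, the apiserver port this profile requests via --apiserver-port=8444, is bound to 127.0.0.1:33451 in this snapshot. As a manual cross-check (a sketch only, assuming the default-k8s-diff-port-774072 container is still running), a single mapping can be read back with the same Go-template pattern the harness itself uses later in these logs for 22/tcp:

	# prints only the published host port for 8444/tcp (33451 in the snapshot above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-774072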
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-774072 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-774072 logs -n 25: (1.895014153s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable dashboard -p embed-certs-818836 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:41 UTC │ 24 Nov 25 03:41 UTC │
	│ start   │ -p embed-certs-818836 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:41 UTC │ 24 Nov 25 03:42 UTC │
	│ image   │ no-preload-262280 image list --format=json                                                                                                                                                                                                          │ no-preload-262280            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ pause   │ -p no-preload-262280 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-262280            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ unpause │ -p no-preload-262280 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-262280            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ delete  │ -p no-preload-262280                                                                                                                                                                                                                                │ no-preload-262280            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ delete  │ -p no-preload-262280                                                                                                                                                                                                                                │ no-preload-262280            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ delete  │ -p disable-driver-mounts-973998                                                                                                                                                                                                                     │ disable-driver-mounts-973998 │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ start   │ -p default-k8s-diff-port-774072 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-774072 │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:43 UTC │
	│ image   │ embed-certs-818836 image list --format=json                                                                                                                                                                                                         │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ pause   │ -p embed-certs-818836 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ unpause │ -p embed-certs-818836 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ delete  │ -p embed-certs-818836                                                                                                                                                                                                                               │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ delete  │ -p embed-certs-818836                                                                                                                                                                                                                               │ embed-certs-818836           │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:42 UTC │
	│ start   │ -p newest-cni-934324 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:42 UTC │ 24 Nov 25 03:43 UTC │
	│ addons  │ enable metrics-server -p newest-cni-934324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ stop    │ -p newest-cni-934324 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ addons  │ enable dashboard -p newest-cni-934324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ start   │ -p newest-cni-934324 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ image   │ newest-cni-934324 image list --format=json                                                                                                                                                                                                          │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ pause   │ -p newest-cni-934324 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ unpause │ -p newest-cni-934324 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ delete  │ -p newest-cni-934324                                                                                                                                                                                                                                │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ delete  │ -p newest-cni-934324                                                                                                                                                                                                                                │ newest-cni-934324            │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │ 24 Nov 25 03:43 UTC │
	│ start   │ -p auto-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-842431                  │ jenkins │ v1.37.0 │ 24 Nov 25 03:43 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 03:43:38
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 03:43:38.708999  492561 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:43:38.709248  492561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:43:38.709279  492561 out.go:374] Setting ErrFile to fd 2...
	I1124 03:43:38.709299  492561 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:43:38.709658  492561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:43:38.710142  492561 out.go:368] Setting JSON to false
	I1124 03:43:38.711166  492561 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8747,"bootTime":1763947072,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:43:38.711263  492561 start.go:143] virtualization:  
	I1124 03:43:38.715355  492561 out.go:179] * [auto-842431] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:43:38.719040  492561 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:43:38.719737  492561 notify.go:221] Checking for updates...
	I1124 03:43:38.725627  492561 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:43:38.728944  492561 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:43:38.732114  492561 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:43:38.735273  492561 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:43:38.738481  492561 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:43:38.742024  492561 config.go:182] Loaded profile config "default-k8s-diff-port-774072": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:43:38.742187  492561 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:43:38.781673  492561 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:43:38.781816  492561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:43:38.856982  492561 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:43:38.846648515 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:43:38.857100  492561 docker.go:319] overlay module found
	I1124 03:43:38.860261  492561 out.go:179] * Using the docker driver based on user configuration
	I1124 03:43:38.863285  492561 start.go:309] selected driver: docker
	I1124 03:43:38.863308  492561 start.go:927] validating driver "docker" against <nil>
	I1124 03:43:38.863323  492561 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:43:38.864085  492561 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:43:38.927616  492561 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:43:38.917147635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:43:38.927791  492561 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 03:43:38.928020  492561 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:43:38.931610  492561 out.go:179] * Using Docker driver with root privileges
	I1124 03:43:38.934659  492561 cni.go:84] Creating CNI manager for ""
	I1124 03:43:38.934737  492561 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:43:38.934751  492561 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 03:43:38.934843  492561 start.go:353] cluster config:
	{Name:auto-842431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-842431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:43:38.938085  492561 out.go:179] * Starting "auto-842431" primary control-plane node in "auto-842431" cluster
	I1124 03:43:38.941137  492561 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 03:43:38.944132  492561 out.go:179] * Pulling base image v0.0.48-1763935653-21975 ...
	I1124 03:43:38.947208  492561 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:43:38.947244  492561 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 03:43:38.947262  492561 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 03:43:38.947274  492561 cache.go:65] Caching tarball of preloaded images
	I1124 03:43:38.947351  492561 preload.go:238] Found /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1124 03:43:38.947362  492561 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1124 03:43:38.947479  492561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/config.json ...
	I1124 03:43:38.947497  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/config.json: {Name:mkd95f1c431341967d7de6279832af4200a84b15 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:38.967198  492561 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon, skipping pull
	I1124 03:43:38.967248  492561 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in daemon, skipping load
	I1124 03:43:38.967276  492561 cache.go:243] Successfully downloaded all kic artifacts
	I1124 03:43:38.967316  492561 start.go:360] acquireMachinesLock for auto-842431: {Name:mk40b6975294d38f37d6a26343eed441c6c387a4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 03:43:38.967436  492561 start.go:364] duration metric: took 96.887µs to acquireMachinesLock for "auto-842431"
	I1124 03:43:38.967467  492561 start.go:93] Provisioning new machine with config: &{Name:auto-842431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-842431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1124 03:43:38.967544  492561 start.go:125] createHost starting for "" (driver="docker")
	W1124 03:43:36.275131  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	W1124 03:43:38.773034  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	I1124 03:43:38.970910  492561 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 03:43:38.971154  492561 start.go:159] libmachine.API.Create for "auto-842431" (driver="docker")
	I1124 03:43:38.971194  492561 client.go:173] LocalClient.Create starting
	I1124 03:43:38.971281  492561 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem
	I1124 03:43:38.971325  492561 main.go:143] libmachine: Decoding PEM data...
	I1124 03:43:38.971352  492561 main.go:143] libmachine: Parsing certificate...
	I1124 03:43:38.971407  492561 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem
	I1124 03:43:38.971430  492561 main.go:143] libmachine: Decoding PEM data...
	I1124 03:43:38.971445  492561 main.go:143] libmachine: Parsing certificate...
	I1124 03:43:38.971825  492561 cli_runner.go:164] Run: docker network inspect auto-842431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 03:43:38.989763  492561 cli_runner.go:211] docker network inspect auto-842431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 03:43:38.989866  492561 network_create.go:284] running [docker network inspect auto-842431] to gather additional debugging logs...
	I1124 03:43:38.989889  492561 cli_runner.go:164] Run: docker network inspect auto-842431
	W1124 03:43:39.010335  492561 cli_runner.go:211] docker network inspect auto-842431 returned with exit code 1
	I1124 03:43:39.010366  492561 network_create.go:287] error running [docker network inspect auto-842431]: docker network inspect auto-842431: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-842431 not found
	I1124 03:43:39.010382  492561 network_create.go:289] output of [docker network inspect auto-842431]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-842431 not found
	
	** /stderr **
	I1124 03:43:39.010497  492561 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:43:39.028621  492561 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-752aaa40bb3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:00:20:e4:71:15} reservation:<nil>}
	I1124 03:43:39.029001  492561 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-cbb0dee281db IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:76:ff:07:3e:91:0f} reservation:<nil>}
	I1124 03:43:39.029261  492561 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-d95ffec60547 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:fe:b5:f2:ed:07:1e} reservation:<nil>}
	I1124 03:43:39.029702  492561 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c0610}
	I1124 03:43:39.029737  492561 network_create.go:124] attempt to create docker network auto-842431 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 03:43:39.029793  492561 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-842431 auto-842431
	I1124 03:43:39.090328  492561 network_create.go:108] docker network auto-842431 192.168.76.0/24 created
	I1124 03:43:39.090366  492561 kic.go:121] calculated static IP "192.168.76.2" for the "auto-842431" container
	I1124 03:43:39.090459  492561 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 03:43:39.105807  492561 cli_runner.go:164] Run: docker volume create auto-842431 --label name.minikube.sigs.k8s.io=auto-842431 --label created_by.minikube.sigs.k8s.io=true
	I1124 03:43:39.123009  492561 oci.go:103] Successfully created a docker volume auto-842431
	I1124 03:43:39.123099  492561 cli_runner.go:164] Run: docker run --rm --name auto-842431-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-842431 --entrypoint /usr/bin/test -v auto-842431:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -d /var/lib
	I1124 03:43:39.684618  492561 oci.go:107] Successfully prepared a docker volume auto-842431
	I1124 03:43:39.684691  492561 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:43:39.684706  492561 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 03:43:39.684774  492561 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-842431:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir
	W1124 03:43:41.273044  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	W1124 03:43:43.772289  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	I1124 03:43:44.140613  492561 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-842431:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 -I lz4 -xf /preloaded.tar -C /extractDir: (4.455801482s)
	I1124 03:43:44.140646  492561 kic.go:203] duration metric: took 4.455937024s to extract preloaded images to volume ...
	W1124 03:43:44.140792  492561 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1124 03:43:44.140912  492561 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 03:43:44.197466  492561 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-842431 --name auto-842431 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-842431 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-842431 --network auto-842431 --ip 192.168.76.2 --volume auto-842431:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787
	I1124 03:43:44.545977  492561 cli_runner.go:164] Run: docker container inspect auto-842431 --format={{.State.Running}}
	I1124 03:43:44.571385  492561 cli_runner.go:164] Run: docker container inspect auto-842431 --format={{.State.Status}}
	I1124 03:43:44.599329  492561 cli_runner.go:164] Run: docker exec auto-842431 stat /var/lib/dpkg/alternatives/iptables
	I1124 03:43:44.658815  492561 oci.go:144] the created container "auto-842431" has a running status.
	I1124 03:43:44.658848  492561 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa...
	I1124 03:43:44.720036  492561 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 03:43:44.752103  492561 cli_runner.go:164] Run: docker container inspect auto-842431 --format={{.State.Status}}
	I1124 03:43:44.777961  492561 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 03:43:44.777985  492561 kic_runner.go:114] Args: [docker exec --privileged auto-842431 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 03:43:44.836360  492561 cli_runner.go:164] Run: docker container inspect auto-842431 --format={{.State.Status}}
	I1124 03:43:44.865401  492561 machine.go:94] provisionDockerMachine start ...
	I1124 03:43:44.865495  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:44.889479  492561 main.go:143] libmachine: Using SSH client type: native
	I1124 03:43:44.889820  492561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 03:43:44.889837  492561 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 03:43:44.890516  492561 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35870->127.0.0.1:33463: read: connection reset by peer
	I1124 03:43:48.040700  492561 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-842431
	
	I1124 03:43:48.040730  492561 ubuntu.go:182] provisioning hostname "auto-842431"
	I1124 03:43:48.040794  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:48.058621  492561 main.go:143] libmachine: Using SSH client type: native
	I1124 03:43:48.058937  492561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 03:43:48.058948  492561 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-842431 && echo "auto-842431" | sudo tee /etc/hostname
	I1124 03:43:48.213574  492561 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-842431
	
	I1124 03:43:48.213656  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:48.235691  492561 main.go:143] libmachine: Using SSH client type: native
	I1124 03:43:48.236017  492561 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 03:43:48.236041  492561 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-842431' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-842431/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-842431' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 03:43:48.384657  492561 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 03:43:48.384687  492561 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21975-255205/.minikube CaCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21975-255205/.minikube}
	I1124 03:43:48.384721  492561 ubuntu.go:190] setting up certificates
	I1124 03:43:48.384730  492561 provision.go:84] configureAuth start
	I1124 03:43:48.384787  492561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-842431
	I1124 03:43:48.403581  492561 provision.go:143] copyHostCerts
	I1124 03:43:48.403647  492561 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem, removing ...
	I1124 03:43:48.403657  492561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem
	I1124 03:43:48.403998  492561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/ca.pem (1078 bytes)
	I1124 03:43:48.404166  492561 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem, removing ...
	I1124 03:43:48.404176  492561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem
	I1124 03:43:48.404208  492561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/cert.pem (1123 bytes)
	I1124 03:43:48.404270  492561 exec_runner.go:144] found /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem, removing ...
	I1124 03:43:48.404274  492561 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem
	I1124 03:43:48.404299  492561 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21975-255205/.minikube/key.pem (1675 bytes)
	I1124 03:43:48.404355  492561 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem org=jenkins.auto-842431 san=[127.0.0.1 192.168.76.2 auto-842431 localhost minikube]
	I1124 03:43:48.502552  492561 provision.go:177] copyRemoteCerts
	I1124 03:43:48.502617  492561 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 03:43:48.502655  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:48.520407  492561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa Username:docker}
	I1124 03:43:48.627546  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 03:43:48.646702  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 03:43:48.665013  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 03:43:48.685933  492561 provision.go:87] duration metric: took 301.180507ms to configureAuth
	I1124 03:43:48.685977  492561 ubuntu.go:206] setting minikube options for container-runtime
	I1124 03:43:48.686164  492561 config.go:182] Loaded profile config "auto-842431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:43:48.686180  492561 machine.go:97] duration metric: took 3.820756529s to provisionDockerMachine
	I1124 03:43:48.686187  492561 client.go:176] duration metric: took 9.714986585s to LocalClient.Create
	I1124 03:43:48.686200  492561 start.go:167] duration metric: took 9.715047845s to libmachine.API.Create "auto-842431"
	I1124 03:43:48.686210  492561 start.go:293] postStartSetup for "auto-842431" (driver="docker")
	I1124 03:43:48.686219  492561 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 03:43:48.686279  492561 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 03:43:48.686322  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:48.704673  492561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa Username:docker}
	I1124 03:43:48.808344  492561 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 03:43:48.811847  492561 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 03:43:48.811886  492561 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 03:43:48.811898  492561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/addons for local assets ...
	I1124 03:43:48.811957  492561 filesync.go:126] Scanning /home/jenkins/minikube-integration/21975-255205/.minikube/files for local assets ...
	I1124 03:43:48.812039  492561 filesync.go:149] local asset: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem -> 2570692.pem in /etc/ssl/certs
	I1124 03:43:48.812143  492561 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 03:43:48.820101  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:43:48.839925  492561 start.go:296] duration metric: took 153.699732ms for postStartSetup
	I1124 03:43:48.840315  492561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-842431
	I1124 03:43:48.857571  492561 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/config.json ...
	I1124 03:43:48.857867  492561 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:43:48.857916  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:48.876454  492561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa Username:docker}
	I1124 03:43:48.977861  492561 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 03:43:48.982949  492561 start.go:128] duration metric: took 10.015389692s to createHost
	I1124 03:43:48.982976  492561 start.go:83] releasing machines lock for "auto-842431", held for 10.015526333s
	I1124 03:43:48.983051  492561 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-842431
	I1124 03:43:49.002420  492561 ssh_runner.go:195] Run: cat /version.json
	I1124 03:43:49.002483  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:49.002833  492561 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 03:43:49.002916  492561 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-842431
	I1124 03:43:49.024010  492561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa Username:docker}
	I1124 03:43:49.025436  492561 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/auto-842431/id_rsa Username:docker}
	I1124 03:43:49.215347  492561 ssh_runner.go:195] Run: systemctl --version
	I1124 03:43:49.228660  492561 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 03:43:49.233682  492561 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 03:43:49.233763  492561 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 03:43:49.260679  492561 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1124 03:43:49.260709  492561 start.go:496] detecting cgroup driver to use...
	I1124 03:43:49.260742  492561 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1124 03:43:49.260791  492561 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1124 03:43:49.276262  492561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1124 03:43:49.289612  492561 docker.go:218] disabling cri-docker service (if available) ...
	I1124 03:43:49.289697  492561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 03:43:49.307188  492561 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 03:43:49.327639  492561 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 03:43:49.456880  492561 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 03:43:49.587435  492561 docker.go:234] disabling docker service ...
	I1124 03:43:49.587534  492561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 03:43:49.613023  492561 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 03:43:49.632840  492561 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 03:43:49.763313  492561 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 03:43:49.884560  492561 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 03:43:49.899497  492561 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 03:43:49.914190  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1124 03:43:49.924860  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1124 03:43:49.935113  492561 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1124 03:43:49.935212  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1124 03:43:49.945284  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:43:49.954733  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1124 03:43:49.964699  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1124 03:43:49.974511  492561 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 03:43:49.983768  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1124 03:43:49.993713  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1124 03:43:50.004742  492561 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1124 03:43:50.018043  492561 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 03:43:50.027253  492561 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 03:43:50.036132  492561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:43:50.185883  492561 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1124 03:43:50.338124  492561 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1124 03:43:50.338271  492561 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1124 03:43:50.345116  492561 start.go:564] Will wait 60s for crictl version
	I1124 03:43:50.345305  492561 ssh_runner.go:195] Run: which crictl
	I1124 03:43:50.350011  492561 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 03:43:50.382640  492561 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1124 03:43:50.382780  492561 ssh_runner.go:195] Run: containerd --version
	I1124 03:43:50.403186  492561 ssh_runner.go:195] Run: containerd --version
	I1124 03:43:50.442249  492561 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	W1124 03:43:46.272849  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	W1124 03:43:48.772258  482662 node_ready.go:57] node "default-k8s-diff-port-774072" has "Ready":"False" status (will retry)
	I1124 03:43:50.272818  482662 node_ready.go:49] node "default-k8s-diff-port-774072" is "Ready"
	I1124 03:43:50.272846  482662 node_ready.go:38] duration metric: took 40.003383552s for node "default-k8s-diff-port-774072" to be "Ready" ...
	I1124 03:43:50.272860  482662 api_server.go:52] waiting for apiserver process to appear ...
	I1124 03:43:50.272916  482662 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:43:50.294447  482662 api_server.go:72] duration metric: took 41.682124385s to wait for apiserver process to appear ...
	I1124 03:43:50.294472  482662 api_server.go:88] waiting for apiserver healthz status ...
	I1124 03:43:50.294492  482662 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1124 03:43:50.304851  482662 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1124 03:43:50.306160  482662 api_server.go:141] control plane version: v1.34.1
	I1124 03:43:50.306183  482662 api_server.go:131] duration metric: took 11.704728ms to wait for apiserver health ...
	I1124 03:43:50.306192  482662 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 03:43:50.310165  482662 system_pods.go:59] 8 kube-system pods found
	I1124 03:43:50.310196  482662 system_pods.go:61] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:50.310203  482662 system_pods.go:61] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:50.310208  482662 system_pods.go:61] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:50.310212  482662 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:50.310217  482662 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:50.310221  482662 system_pods.go:61] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:50.310224  482662 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:50.310231  482662 system_pods.go:61] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:50.310244  482662 system_pods.go:74] duration metric: took 4.047109ms to wait for pod list to return data ...
	I1124 03:43:50.310252  482662 default_sa.go:34] waiting for default service account to be created ...
	I1124 03:43:50.313210  482662 default_sa.go:45] found service account: "default"
	I1124 03:43:50.313230  482662 default_sa.go:55] duration metric: took 2.972607ms for default service account to be created ...
	I1124 03:43:50.313239  482662 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 03:43:50.317249  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:50.317329  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:50.317355  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:50.317393  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:50.317419  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:50.317439  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:50.317477  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:50.317502  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:50.317524  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:50.317575  482662 retry.go:31] will retry after 204.854857ms: missing components: kube-dns
	I1124 03:43:50.534344  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:50.534377  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:50.534384  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:50.534393  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:50.534398  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:50.534402  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:50.534406  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:50.534410  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:50.534416  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:50.534431  482662 retry.go:31] will retry after 303.947041ms: missing components: kube-dns
	I1124 03:43:50.843331  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:50.843362  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:50.843368  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:50.843375  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:50.843379  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:50.843384  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:50.843388  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:50.843392  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:50.843401  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:50.843417  482662 retry.go:31] will retry after 479.793876ms: missing components: kube-dns
	I1124 03:43:50.445273  492561 cli_runner.go:164] Run: docker network inspect auto-842431 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 03:43:50.462029  492561 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 03:43:50.466606  492561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:43:50.477193  492561 kubeadm.go:884] updating cluster {Name:auto-842431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-842431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 03:43:50.477315  492561 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 03:43:50.477389  492561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:43:50.501134  492561 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:43:50.501164  492561 containerd.go:534] Images already preloaded, skipping extraction
	I1124 03:43:50.501222  492561 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 03:43:50.543116  492561 containerd.go:627] all images are preloaded for containerd runtime.
	I1124 03:43:50.543195  492561 cache_images.go:86] Images are preloaded, skipping loading
	I1124 03:43:50.543218  492561 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1124 03:43:50.543357  492561 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=auto-842431 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-842431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 03:43:50.543461  492561 ssh_runner.go:195] Run: sudo crictl info
	I1124 03:43:50.584882  492561 cni.go:84] Creating CNI manager for ""
	I1124 03:43:50.584902  492561 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 03:43:50.584923  492561 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 03:43:50.584955  492561 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-842431 NodeName:auto-842431 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 03:43:50.585075  492561 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "auto-842431"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 03:43:50.585148  492561 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 03:43:50.596696  492561 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 03:43:50.596818  492561 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 03:43:50.608916  492561 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I1124 03:43:50.631598  492561 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 03:43:50.647688  492561 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1124 03:43:50.668913  492561 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 03:43:50.678499  492561 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 03:43:50.694765  492561 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 03:43:50.871837  492561 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 03:43:50.892090  492561 certs.go:69] Setting up /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431 for IP: 192.168.76.2
	I1124 03:43:50.892160  492561 certs.go:195] generating shared ca certs ...
	I1124 03:43:50.892190  492561 certs.go:227] acquiring lock for ca certs: {Name:mk7774f5066ddc2da4b4108ade01c52c4ed6acef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:50.892393  492561 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key
	I1124 03:43:50.892506  492561 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key
	I1124 03:43:50.892557  492561 certs.go:257] generating profile certs ...
	I1124 03:43:50.892645  492561 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.key
	I1124 03:43:50.892678  492561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt with IP's: []
	I1124 03:43:51.221611  492561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt ...
	I1124 03:43:51.221688  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: {Name:mkbfb3b11fa96a1355b7693402c3d99e9c6c04f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.221933  492561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.key ...
	I1124 03:43:51.221968  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.key: {Name:mk10b89b1e440af1917b232f886f66b7dc5d07a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.222114  492561 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key.bd98c337
	I1124 03:43:51.222152  492561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt.bd98c337 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 03:43:51.737672  492561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt.bd98c337 ...
	I1124 03:43:51.737708  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt.bd98c337: {Name:mk4bcc8ea38ebd3491e24f1f2d94b9d49900983a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.737897  492561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key.bd98c337 ...
	I1124 03:43:51.737912  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key.bd98c337: {Name:mka6b3b7ed566d487f1c7e4e27f303eab953a5d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.738001  492561 certs.go:382] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt.bd98c337 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt
	I1124 03:43:51.738092  492561 certs.go:386] copying /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key.bd98c337 -> /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key
	I1124 03:43:51.738161  492561 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.key
	I1124 03:43:51.738178  492561 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.crt with IP's: []
	I1124 03:43:51.804700  492561 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.crt ...
	I1124 03:43:51.804733  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.crt: {Name:mk2e7f2853b3a0e922f6664a73c7b2940788847c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.804950  492561 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.key ...
	I1124 03:43:51.804965  492561 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.key: {Name:mkc5dc681d2349188ebb45b3500af314e4e7bb5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 03:43:51.805177  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem (1338 bytes)
	W1124 03:43:51.805227  492561 certs.go:480] ignoring /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069_empty.pem, impossibly tiny 0 bytes
	I1124 03:43:51.805240  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 03:43:51.805269  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/ca.pem (1078 bytes)
	I1124 03:43:51.805300  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/cert.pem (1123 bytes)
	I1124 03:43:51.805328  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/certs/key.pem (1675 bytes)
	I1124 03:43:51.805388  492561 certs.go:484] found cert: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem (1708 bytes)
	I1124 03:43:51.805973  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 03:43:51.824305  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 03:43:51.843804  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 03:43:51.861226  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 03:43:51.878732  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 03:43:51.912490  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 03:43:51.954531  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 03:43:51.988758  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 03:43:52.015270  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/ssl/certs/2570692.pem --> /usr/share/ca-certificates/2570692.pem (1708 bytes)
	I1124 03:43:52.037001  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 03:43:52.058103  492561 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21975-255205/.minikube/certs/257069.pem --> /usr/share/ca-certificates/257069.pem (1338 bytes)
	I1124 03:43:52.079134  492561 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 03:43:52.094176  492561 ssh_runner.go:195] Run: openssl version
	I1124 03:43:52.100777  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/257069.pem && ln -fs /usr/share/ca-certificates/257069.pem /etc/ssl/certs/257069.pem"
	I1124 03:43:52.110095  492561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/257069.pem
	I1124 03:43:52.114091  492561 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 02:58 /usr/share/ca-certificates/257069.pem
	I1124 03:43:52.114154  492561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/257069.pem
	I1124 03:43:52.155571  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/257069.pem /etc/ssl/certs/51391683.0"
	I1124 03:43:52.164152  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2570692.pem && ln -fs /usr/share/ca-certificates/2570692.pem /etc/ssl/certs/2570692.pem"
	I1124 03:43:52.173075  492561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2570692.pem
	I1124 03:43:52.178330  492561 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 02:58 /usr/share/ca-certificates/2570692.pem
	I1124 03:43:52.178432  492561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2570692.pem
	I1124 03:43:52.240601  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/2570692.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 03:43:52.250066  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 03:43:52.259257  492561 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:43:52.263359  492561 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 02:51 /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:43:52.263428  492561 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 03:43:52.307589  492561 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 03:43:52.316342  492561 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 03:43:52.320150  492561 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 03:43:52.320205  492561 kubeadm.go:401] StartCluster: {Name:auto-842431 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-842431 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:43:52.320297  492561 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1124 03:43:52.320357  492561 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 03:43:52.359372  492561 cri.go:89] found id: ""
	I1124 03:43:52.359486  492561 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 03:43:52.371930  492561 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 03:43:52.382881  492561 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 03:43:52.382995  492561 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 03:43:52.397046  492561 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 03:43:52.397122  492561 kubeadm.go:158] found existing configuration files:
	
	I1124 03:43:52.397208  492561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 03:43:52.410297  492561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 03:43:52.410404  492561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 03:43:52.418900  492561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 03:43:52.426900  492561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 03:43:52.426965  492561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 03:43:52.435169  492561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 03:43:52.444090  492561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 03:43:52.444212  492561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 03:43:52.452248  492561 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 03:43:52.460115  492561 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 03:43:52.460233  492561 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 03:43:52.468356  492561 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 03:43:52.536295  492561 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1124 03:43:52.536643  492561 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1124 03:43:52.617094  492561 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 03:43:51.329080  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:51.329113  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:51.329120  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:51.329126  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:51.329130  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:51.329134  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:51.329138  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:51.329142  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:51.329147  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:51.329173  482662 retry.go:31] will retry after 434.091686ms: missing components: kube-dns
	I1124 03:43:51.772208  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:51.772248  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 03:43:51.772256  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:51.772279  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:51.772284  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:51.772288  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:51.772292  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:51.772296  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:51.772301  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 03:43:51.772316  482662 retry.go:31] will retry after 571.716917ms: missing components: kube-dns
	I1124 03:43:52.349169  482662 system_pods.go:86] 8 kube-system pods found
	I1124 03:43:52.349198  482662 system_pods.go:89] "coredns-66bc5c9577-jgtk7" [7dea22e8-aa22-44cd-99fc-82662424e440] Running
	I1124 03:43:52.349205  482662 system_pods.go:89] "etcd-default-k8s-diff-port-774072" [9a9093f6-de4d-4735-bd20-281135932ac3] Running
	I1124 03:43:52.349211  482662 system_pods.go:89] "kindnet-2prqp" [c770b4cd-7775-4aac-aa0a-1fa63016eb77] Running
	I1124 03:43:52.349215  482662 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-774072" [ce4dfec7-ce92-4d05-89cc-9e40ed3aae3c] Running
	I1124 03:43:52.349220  482662 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-774072" [0c7f923d-d0bf-46a6-818b-cd3f6a51aa3d] Running
	I1124 03:43:52.349223  482662 system_pods.go:89] "kube-proxy-27m9s" [87f0e4dc-0625-4dc4-b724-459a8547efb5] Running
	I1124 03:43:52.349228  482662 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-774072" [1fd1d8ac-3bdd-473b-aa09-f225a8c8e34f] Running
	I1124 03:43:52.349232  482662 system_pods.go:89] "storage-provisioner" [2d4a7d3a-e840-4348-a3ce-f56234bb94c3] Running
	I1124 03:43:52.349240  482662 system_pods.go:126] duration metric: took 2.03599487s to wait for k8s-apps to be running ...
	I1124 03:43:52.349246  482662 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 03:43:52.349300  482662 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:43:52.364328  482662 system_svc.go:56] duration metric: took 15.071767ms WaitForService to wait for kubelet
	I1124 03:43:52.364355  482662 kubeadm.go:587] duration metric: took 43.752037263s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 03:43:52.364373  482662 node_conditions.go:102] verifying NodePressure condition ...
	I1124 03:43:52.368734  482662 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1124 03:43:52.368764  482662 node_conditions.go:123] node cpu capacity is 2
	I1124 03:43:52.368777  482662 node_conditions.go:105] duration metric: took 4.399677ms to run NodePressure ...
	I1124 03:43:52.368791  482662 start.go:242] waiting for startup goroutines ...
	I1124 03:43:52.368798  482662 start.go:247] waiting for cluster config update ...
	I1124 03:43:52.368809  482662 start.go:256] writing updated cluster config ...
	I1124 03:43:52.369096  482662 ssh_runner.go:195] Run: rm -f paused
	I1124 03:43:52.374616  482662 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:43:52.378601  482662 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jgtk7" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.385966  482662 pod_ready.go:94] pod "coredns-66bc5c9577-jgtk7" is "Ready"
	I1124 03:43:52.386042  482662 pod_ready.go:86] duration metric: took 7.419817ms for pod "coredns-66bc5c9577-jgtk7" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.388928  482662 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.396396  482662 pod_ready.go:94] pod "etcd-default-k8s-diff-port-774072" is "Ready"
	I1124 03:43:52.396490  482662 pod_ready.go:86] duration metric: took 7.491489ms for pod "etcd-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.400005  482662 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.405564  482662 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-774072" is "Ready"
	I1124 03:43:52.405639  482662 pod_ready.go:86] duration metric: took 5.604387ms for pod "kube-apiserver-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.409289  482662 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.779984  482662 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-774072" is "Ready"
	I1124 03:43:52.780014  482662 pod_ready.go:86] duration metric: took 370.652265ms for pod "kube-controller-manager-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:52.980908  482662 pod_ready.go:83] waiting for pod "kube-proxy-27m9s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:53.380832  482662 pod_ready.go:94] pod "kube-proxy-27m9s" is "Ready"
	I1124 03:43:53.380860  482662 pod_ready.go:86] duration metric: took 399.927379ms for pod "kube-proxy-27m9s" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:53.580314  482662 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:53.979496  482662 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-774072" is "Ready"
	I1124 03:43:53.979572  482662 pod_ready.go:86] duration metric: took 399.230209ms for pod "kube-scheduler-default-k8s-diff-port-774072" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 03:43:53.979609  482662 pod_ready.go:40] duration metric: took 1.60496335s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 03:43:54.063525  482662 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1124 03:43:54.067108  482662 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-774072" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	56f984231e308       1611cd07b61d5       11 seconds ago       Running             busybox                   0                   f2eb65427376f       busybox                                                default
	00e39fe2c36ce       138784d87c9c5       18 seconds ago       Running             coredns                   0                   a521b3af92233       coredns-66bc5c9577-jgtk7                               kube-system
	3aedae55cebdd       ba04bb24b9575       18 seconds ago       Running             storage-provisioner       0                   d5714bd29c09f       storage-provisioner                                    kube-system
	a0a1945067a8f       05baa95f5142d       58 seconds ago       Running             kube-proxy                0                   26be55cf0f473       kube-proxy-27m9s                                       kube-system
	dc1ff819cdf52       b1a8c6f707935       59 seconds ago       Running             kindnet-cni               0                   a1df1ff74e137       kindnet-2prqp                                          kube-system
	e9f0e2a57bbbc       a1894772a478e       About a minute ago   Running             etcd                      0                   89a5cc6d3af5a       etcd-default-k8s-diff-port-774072                      kube-system
	52a07b03fab83       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   be9e9c09873ed       kube-apiserver-default-k8s-diff-port-774072            kube-system
	ba331bec1dcb5       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   77f739542d214       kube-scheduler-default-k8s-diff-port-774072            kube-system
	bd316fbcb0bdb       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   366e108c422cc       kube-controller-manager-default-k8s-diff-port-774072   kube-system
	
	
	==> containerd <==
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.743841732Z" level=info msg="CreateContainer within sandbox \"d5714bd29c09f292dec1088cdb6f274865a92047b0ef73f81bbf3ec14e678fac\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"3aedae55cebdd5950afd18ab88e0fc52c1e0f20f2ea0d2b19e5d4e1fdc8cd0d5\""
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.744966384Z" level=info msg="StartContainer for \"3aedae55cebdd5950afd18ab88e0fc52c1e0f20f2ea0d2b19e5d4e1fdc8cd0d5\""
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.746792710Z" level=info msg="connecting to shim 3aedae55cebdd5950afd18ab88e0fc52c1e0f20f2ea0d2b19e5d4e1fdc8cd0d5" address="unix:///run/containerd/s/1d4e6b00a22d8603ea8830cf7c22e0c059b4606c5a061f41ce0704d75d309a92" protocol=ttrpc version=3
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.760780818Z" level=info msg="Container 00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.781991031Z" level=info msg="CreateContainer within sandbox \"a521b3af92233747f693570dda8b3453286615fcc9cf60488afa868c9389cb01\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65\""
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.787261319Z" level=info msg="StartContainer for \"00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65\""
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.792709118Z" level=info msg="connecting to shim 00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65" address="unix:///run/containerd/s/07f043158c6587a75d849ba277381802749153ec92053492d6aa582a8f009099" protocol=ttrpc version=3
	Nov 24 03:43:50 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:50.963186631Z" level=info msg="StartContainer for \"3aedae55cebdd5950afd18ab88e0fc52c1e0f20f2ea0d2b19e5d4e1fdc8cd0d5\" returns successfully"
	Nov 24 03:43:51 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:51.050302245Z" level=info msg="StartContainer for \"00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65\" returns successfully"
	Nov 24 03:43:54 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:54.666488277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:362be5db-8e55-42d3-af79-d334755f6b33,Namespace:default,Attempt:0,}"
	Nov 24 03:43:54 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:54.752408709Z" level=info msg="connecting to shim f2eb65427376f46f3c4b42ee8e956f570dab41e978d79490f7db2c4cdda76dbc" address="unix:///run/containerd/s/917b63682aeac47978163afc8198993b032393ce7f3fa36e23c01e0492e45405" namespace=k8s.io protocol=ttrpc version=3
	Nov 24 03:43:54 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:54.884975934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:362be5db-8e55-42d3-af79-d334755f6b33,Namespace:default,Attempt:0,} returns sandbox id \"f2eb65427376f46f3c4b42ee8e956f570dab41e978d79490f7db2c4cdda76dbc\""
	Nov 24 03:43:54 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:54.890794944Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.252276188Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.254321388Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937187"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.256784338Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.260701879Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.261933821Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.370953941s"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.262262784Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.275817773Z" level=info msg="CreateContainer within sandbox \"f2eb65427376f46f3c4b42ee8e956f570dab41e978d79490f7db2c4cdda76dbc\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.290384429Z" level=info msg="Container 56f984231e3084caf3d8e535f57d381e60acfbd8638a14f79f050f8809017a26: CDI devices from CRI Config.CDIDevices: []"
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.301020662Z" level=info msg="CreateContainer within sandbox \"f2eb65427376f46f3c4b42ee8e956f570dab41e978d79490f7db2c4cdda76dbc\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"56f984231e3084caf3d8e535f57d381e60acfbd8638a14f79f050f8809017a26\""
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.304634430Z" level=info msg="StartContainer for \"56f984231e3084caf3d8e535f57d381e60acfbd8638a14f79f050f8809017a26\""
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.307364492Z" level=info msg="connecting to shim 56f984231e3084caf3d8e535f57d381e60acfbd8638a14f79f050f8809017a26" address="unix:///run/containerd/s/917b63682aeac47978163afc8198993b032393ce7f3fa36e23c01e0492e45405" protocol=ttrpc version=3
	Nov 24 03:43:57 default-k8s-diff-port-774072 containerd[758]: time="2025-11-24T03:43:57.408995736Z" level=info msg="StartContainer for \"56f984231e3084caf3d8e535f57d381e60acfbd8638a14f79f050f8809017a26\" returns successfully"
	
	
	==> coredns [00e39fe2c36cef85d92e8cc8e38126ed7b94614eea6c660bf51a185902ce5e65] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53503 - 55369 "HINFO IN 8793316064930919524.7051367551294291303. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031857408s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-774072
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-774072
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=525fef2394fe4854b27b3c3385e33403fd802864
	                    minikube.k8s.io/name=default-k8s-diff-port-774072
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T03_43_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 03:42:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-774072
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 03:44:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 03:44:05 +0000   Mon, 24 Nov 2025 03:42:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 03:44:05 +0000   Mon, 24 Nov 2025 03:42:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 03:44:05 +0000   Mon, 24 Nov 2025 03:42:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 03:44:05 +0000   Mon, 24 Nov 2025 03:43:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-774072
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 304a86241bf1bbb85bd31db5692386d7
	  System UUID:                b34292c3-f00c-4314-8f46-89239011216f
	  Boot ID:                    63a8a852-1462-44b1-9d6f-f77d26e8568f
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-66bc5c9577-jgtk7                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     61s
	  kube-system                 etcd-default-k8s-diff-port-774072                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         69s
	  kube-system                 kindnet-2prqp                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      61s
	  kube-system                 kube-apiserver-default-k8s-diff-port-774072             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-774072    200m (10%)    0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-proxy-27m9s                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-scheduler-default-k8s-diff-port-774072             100m (5%)     0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 58s                kube-proxy       
	  Normal   NodeAllocatableEnforced  78s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 78s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  78s (x8 over 78s)  kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    78s (x8 over 78s)  kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     78s (x7 over 78s)  kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasSufficientPID
	  Normal   Starting                 78s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 66s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 66s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  66s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  66s                kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    66s                kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     66s                kubelet          Node default-k8s-diff-port-774072 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           62s                node-controller  Node default-k8s-diff-port-774072 event: Registered Node default-k8s-diff-port-774072 in Controller
	  Normal   NodeReady                19s                kubelet          Node default-k8s-diff-port-774072 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov24 02:27] overlayfs: idmapped layers are currently not supported
	[Nov24 02:28] overlayfs: idmapped layers are currently not supported
	[Nov24 02:30] overlayfs: idmapped layers are currently not supported
	[  +9.824160] overlayfs: idmapped layers are currently not supported
	[Nov24 02:31] overlayfs: idmapped layers are currently not supported
	[Nov24 02:32] overlayfs: idmapped layers are currently not supported
	[ +27.981383] overlayfs: idmapped layers are currently not supported
	[Nov24 02:33] overlayfs: idmapped layers are currently not supported
	[Nov24 02:34] overlayfs: idmapped layers are currently not supported
	[Nov24 02:35] overlayfs: idmapped layers are currently not supported
	[Nov24 02:36] overlayfs: idmapped layers are currently not supported
	[Nov24 02:37] overlayfs: idmapped layers are currently not supported
	[Nov24 02:38] overlayfs: idmapped layers are currently not supported
	[Nov24 02:39] overlayfs: idmapped layers are currently not supported
	[ +24.837346] overlayfs: idmapped layers are currently not supported
	[Nov24 02:40] overlayfs: idmapped layers are currently not supported
	[ +40.823948] overlayfs: idmapped layers are currently not supported
	[  +1.705989] overlayfs: idmapped layers are currently not supported
	[Nov24 02:42] overlayfs: idmapped layers are currently not supported
	[ +21.661904] overlayfs: idmapped layers are currently not supported
	[Nov24 02:44] overlayfs: idmapped layers are currently not supported
	[  +1.074777] overlayfs: idmapped layers are currently not supported
	[Nov24 02:46] overlayfs: idmapped layers are currently not supported
	[ +19.120392] overlayfs: idmapped layers are currently not supported
	[Nov24 02:48] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [e9f0e2a57bbbcfee223e5d178a062cb80bd6225e0085b14ac5aad3d60b31cd5b] <==
	{"level":"warn","ts":"2025-11-24T03:42:57.353674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.419671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.434775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.508920Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.526830Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.563981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.586751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49394","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.633885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.714856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49456","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.726991Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.761385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.788901Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.821626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.854213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.900172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.934615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.961185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:57.998910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.015491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.044567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.076248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.093025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.122439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.140857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T03:42:58.280812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49720","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 03:44:09 up  2:26,  0 user,  load average: 6.06, 4.49, 3.43
	Linux default-k8s-diff-port-774072 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [dc1ff819cdf5287b5b8789ac09225c931ea261a4c3ac9d97c8ebffbd5e511c42] <==
	I1124 03:43:09.826449       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 03:43:09.827945       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 03:43:09.828104       1 main.go:148] setting mtu 1500 for CNI 
	I1124 03:43:09.828117       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 03:43:09.828132       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T03:43:10Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 03:43:10.035204       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 03:43:10.035226       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 03:43:10.035234       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 03:43:10.035554       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 03:43:40.039763       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 03:43:40.039932       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 03:43:40.040010       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 03:43:40.040090       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1124 03:43:41.736228       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 03:43:41.736266       1 metrics.go:72] Registering metrics
	I1124 03:43:41.736334       1 controller.go:711] "Syncing nftables rules"
	I1124 03:43:50.037082       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:43:50.037136       1 main.go:301] handling current node
	I1124 03:44:00.036867       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 03:44:00.036916       1 main.go:301] handling current node
	
	
	==> kube-apiserver [52a07b03fab83f0354b99cda2a352a848f5d743749226cd71a933518934bfc74] <==
	I1124 03:42:59.672116       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	E1124 03:42:59.690788       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1124 03:42:59.695640       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:42:59.755468       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:42:59.756353       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 03:42:59.769579       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 03:42:59.898868       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 03:43:00.118776       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 03:43:00.132790       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 03:43:00.133954       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 03:43:01.698758       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 03:43:01.778227       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 03:43:01.859635       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 03:43:01.871106       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 03:43:01.872628       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 03:43:01.878264       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 03:43:02.459070       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 03:43:03.081760       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 03:43:03.107001       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 03:43:03.122860       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 03:43:07.806068       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 03:43:08.209243       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:43:08.217187       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 03:43:08.626944       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 03:44:04.585382       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:54964: use of closed network connection
	
	
	==> kube-controller-manager [bd316fbcb0bdb1412b6625831cdf21ae957836aeb36b4dfb316548f954754911] <==
	I1124 03:43:07.602459       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 03:43:07.602741       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 03:43:07.603243       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 03:43:07.592743       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 03:43:07.609702       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 03:43:07.611670       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 03:43:07.620963       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 03:43:07.622154       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 03:43:07.627180       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 03:43:07.636309       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-774072" podCIDRs=["10.244.0.0/24"]
	I1124 03:43:07.637769       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:43:07.650004       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 03:43:07.650313       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 03:43:07.650539       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 03:43:07.650605       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 03:43:07.650668       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 03:43:07.650676       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 03:43:07.650681       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 03:43:07.650752       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 03:43:07.651345       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 03:43:07.651925       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 03:43:07.652004       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 03:43:07.652034       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 03:43:07.652255       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 03:43:52.608336       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a0a1945067a8f9db6c41a9276bcaf5758a935d836a628957d0257595ad0648f6] <==
	I1124 03:43:10.518396       1 server_linux.go:53] "Using iptables proxy"
	I1124 03:43:10.598365       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 03:43:10.699367       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 03:43:10.699411       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 03:43:10.699524       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 03:43:10.725223       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 03:43:10.725297       1 server_linux.go:132] "Using iptables Proxier"
	I1124 03:43:10.730350       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 03:43:10.730807       1 server.go:527] "Version info" version="v1.34.1"
	I1124 03:43:10.730833       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 03:43:10.733874       1 config.go:200] "Starting service config controller"
	I1124 03:43:10.733900       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 03:43:10.733962       1 config.go:106] "Starting endpoint slice config controller"
	I1124 03:43:10.733977       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 03:43:10.734000       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 03:43:10.734032       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 03:43:10.738138       1 config.go:309] "Starting node config controller"
	I1124 03:43:10.738175       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 03:43:10.738184       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 03:43:10.834127       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 03:43:10.834142       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 03:43:10.834180       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [ba331bec1dcb5982c85300f5dc3a0d66515f5e944f0be718440710fb98498763] <==
	E1124 03:42:59.837441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 03:42:59.837507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:42:59.837571       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:42:59.837638       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:42:59.837692       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 03:42:59.837759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 03:42:59.837828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 03:42:59.837977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:42:59.838046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 03:42:59.838172       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:42:59.838241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 03:42:59.838298       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 03:42:59.838390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 03:43:00.669568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 03:43:00.671757       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 03:43:00.774376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 03:43:00.830022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 03:43:00.874430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 03:43:00.938218       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 03:43:00.998388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 03:43:01.039727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 03:43:01.043646       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1124 03:43:01.094568       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 03:43:01.228878       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1124 03:43:02.978659       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 03:43:04 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:04.623522    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-774072" podStartSLOduration=2.62346106 podStartE2EDuration="2.62346106s" podCreationTimestamp="2025-11-24 03:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:43:04.585232515 +0000 UTC m=+1.544026592" watchObservedRunningTime="2025-11-24 03:43:04.62346106 +0000 UTC m=+1.582255121"
	Nov 24 03:43:04 default-k8s-diff-port-774072 kubelet[1454]: E1124 03:43:04.647742    1454 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-774072\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-774072"
	Nov 24 03:43:07 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:07.682284    1454 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 03:43:07 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:07.683415    1454 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.792729    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c770b4cd-7775-4aac-aa0a-1fa63016eb77-cni-cfg\") pod \"kindnet-2prqp\" (UID: \"c770b4cd-7775-4aac-aa0a-1fa63016eb77\") " pod="kube-system/kindnet-2prqp"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.792771    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c770b4cd-7775-4aac-aa0a-1fa63016eb77-lib-modules\") pod \"kindnet-2prqp\" (UID: \"c770b4cd-7775-4aac-aa0a-1fa63016eb77\") " pod="kube-system/kindnet-2prqp"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.792814    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqwdv\" (UniqueName: \"kubernetes.io/projected/c770b4cd-7775-4aac-aa0a-1fa63016eb77-kube-api-access-mqwdv\") pod \"kindnet-2prqp\" (UID: \"c770b4cd-7775-4aac-aa0a-1fa63016eb77\") " pod="kube-system/kindnet-2prqp"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.792837    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c770b4cd-7775-4aac-aa0a-1fa63016eb77-xtables-lock\") pod \"kindnet-2prqp\" (UID: \"c770b4cd-7775-4aac-aa0a-1fa63016eb77\") " pod="kube-system/kindnet-2prqp"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: E1124 03:43:08.796909    1454 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:default-k8s-diff-port-774072\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'default-k8s-diff-port-774072' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.900806    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/87f0e4dc-0625-4dc4-b724-459a8547efb5-kube-proxy\") pod \"kube-proxy-27m9s\" (UID: \"87f0e4dc-0625-4dc4-b724-459a8547efb5\") " pod="kube-system/kube-proxy-27m9s"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.900854    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9mj\" (UniqueName: \"kubernetes.io/projected/87f0e4dc-0625-4dc4-b724-459a8547efb5-kube-api-access-4f9mj\") pod \"kube-proxy-27m9s\" (UID: \"87f0e4dc-0625-4dc4-b724-459a8547efb5\") " pod="kube-system/kube-proxy-27m9s"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.900922    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87f0e4dc-0625-4dc4-b724-459a8547efb5-lib-modules\") pod \"kube-proxy-27m9s\" (UID: \"87f0e4dc-0625-4dc4-b724-459a8547efb5\") " pod="kube-system/kube-proxy-27m9s"
	Nov 24 03:43:08 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:08.900940    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87f0e4dc-0625-4dc4-b724-459a8547efb5-xtables-lock\") pod \"kube-proxy-27m9s\" (UID: \"87f0e4dc-0625-4dc4-b724-459a8547efb5\") " pod="kube-system/kube-proxy-27m9s"
	Nov 24 03:43:09 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:09.023416    1454 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 24 03:43:10 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:10.681563    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-27m9s" podStartSLOduration=2.681546414 podStartE2EDuration="2.681546414s" podCreationTimestamp="2025-11-24 03:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:43:10.677195049 +0000 UTC m=+7.635989159" watchObservedRunningTime="2025-11-24 03:43:10.681546414 +0000 UTC m=+7.640340475"
	Nov 24 03:43:11 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:11.689499    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2prqp" podStartSLOduration=3.689477365 podStartE2EDuration="3.689477365s" podCreationTimestamp="2025-11-24 03:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:43:10.69580657 +0000 UTC m=+7.654600623" watchObservedRunningTime="2025-11-24 03:43:11.689477365 +0000 UTC m=+8.648271426"
	Nov 24 03:43:50 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:50.142158    1454 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 03:43:50 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:50.220169    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/2d4a7d3a-e840-4348-a3ce-f56234bb94c3-tmp\") pod \"storage-provisioner\" (UID: \"2d4a7d3a-e840-4348-a3ce-f56234bb94c3\") " pod="kube-system/storage-provisioner"
	Nov 24 03:43:50 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:50.220336    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf4ls\" (UniqueName: \"kubernetes.io/projected/2d4a7d3a-e840-4348-a3ce-f56234bb94c3-kube-api-access-bf4ls\") pod \"storage-provisioner\" (UID: \"2d4a7d3a-e840-4348-a3ce-f56234bb94c3\") " pod="kube-system/storage-provisioner"
	Nov 24 03:43:50 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:50.321034    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bjw8\" (UniqueName: \"kubernetes.io/projected/7dea22e8-aa22-44cd-99fc-82662424e440-kube-api-access-7bjw8\") pod \"coredns-66bc5c9577-jgtk7\" (UID: \"7dea22e8-aa22-44cd-99fc-82662424e440\") " pod="kube-system/coredns-66bc5c9577-jgtk7"
	Nov 24 03:43:50 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:50.321312    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dea22e8-aa22-44cd-99fc-82662424e440-config-volume\") pod \"coredns-66bc5c9577-jgtk7\" (UID: \"7dea22e8-aa22-44cd-99fc-82662424e440\") " pod="kube-system/coredns-66bc5c9577-jgtk7"
	Nov 24 03:43:51 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:51.941314    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.941295047 podStartE2EDuration="41.941295047s" podCreationTimestamp="2025-11-24 03:43:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:43:51.941053273 +0000 UTC m=+48.899847334" watchObservedRunningTime="2025-11-24 03:43:51.941295047 +0000 UTC m=+48.900089100"
	Nov 24 03:43:51 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:51.941449    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jgtk7" podStartSLOduration=43.941441485 podStartE2EDuration="43.941441485s" podCreationTimestamp="2025-11-24 03:43:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 03:43:51.871328022 +0000 UTC m=+48.830122099" watchObservedRunningTime="2025-11-24 03:43:51.941441485 +0000 UTC m=+48.900235546"
	Nov 24 03:43:54 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:54.376573    1454 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74tw4\" (UniqueName: \"kubernetes.io/projected/362be5db-8e55-42d3-af79-d334755f6b33-kube-api-access-74tw4\") pod \"busybox\" (UID: \"362be5db-8e55-42d3-af79-d334755f6b33\") " pod="default/busybox"
	Nov 24 03:43:57 default-k8s-diff-port-774072 kubelet[1454]: I1124 03:43:57.823050    1454 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.44810191 podStartE2EDuration="3.823022524s" podCreationTimestamp="2025-11-24 03:43:54 +0000 UTC" firstStartedPulling="2025-11-24 03:43:54.889742473 +0000 UTC m=+51.848536534" lastFinishedPulling="2025-11-24 03:43:57.264663096 +0000 UTC m=+54.223457148" observedRunningTime="2025-11-24 03:43:57.822342322 +0000 UTC m=+54.781136383" watchObservedRunningTime="2025-11-24 03:43:57.823022524 +0000 UTC m=+54.781816576"
	
	
	==> storage-provisioner [3aedae55cebdd5950afd18ab88e0fc52c1e0f20f2ea0d2b19e5d4e1fdc8cd0d5] <==
	W1124 03:43:51.012203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:43:51.013174       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 03:43:51.016095       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-774072_0db07085-7e4b-4b72-b965-36b9f8c3eb11!
	I1124 03:43:51.024931       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3251c2da-243b-4662-8625-678dc3c80640", APIVersion:"v1", ResourceVersion:"460", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-774072_0db07085-7e4b-4b72-b965-36b9f8c3eb11 became leader
	W1124 03:43:51.026031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:51.039740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 03:43:51.121702       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-774072_0db07085-7e4b-4b72-b965-36b9f8c3eb11!
	W1124 03:43:53.043622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:53.052567       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:55.057028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:55.063272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:57.067294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:57.073187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:59.076218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:43:59.082363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:01.087560       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:01.097571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:03.103470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:03.111050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:05.114844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:05.134316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:07.137786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:07.144141       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:09.152686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 03:44:09.164757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-774072 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (16.61s)


Test pass (299/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 36.56
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 38.25
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.1
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 174.43
29 TestAddons/serial/Volcano 39.73
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 8.86
35 TestAddons/parallel/Registry 17.36
36 TestAddons/parallel/RegistryCreds 0.78
37 TestAddons/parallel/Ingress 19.8
38 TestAddons/parallel/InspektorGadget 11.81
39 TestAddons/parallel/MetricsServer 5.79
41 TestAddons/parallel/CSI 49.35
42 TestAddons/parallel/Headlamp 11.28
43 TestAddons/parallel/CloudSpanner 5.66
44 TestAddons/parallel/LocalPath 51.6
45 TestAddons/parallel/NvidiaDevicePlugin 6.56
46 TestAddons/parallel/Yakd 11.85
48 TestAddons/StoppedEnableDisable 12.34
49 TestCertOptions 38.59
50 TestCertExpiration 233.77
52 TestForceSystemdFlag 36.52
53 TestForceSystemdEnv 43.54
54 TestDockerEnvContainerd 46.44
58 TestErrorSpam/setup 32.18
59 TestErrorSpam/start 0.82
60 TestErrorSpam/status 1.2
61 TestErrorSpam/pause 1.75
62 TestErrorSpam/unpause 1.82
63 TestErrorSpam/stop 1.61
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 79.86
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.1
70 TestFunctional/serial/KubeContext 0.08
71 TestFunctional/serial/KubectlGetPods 0.1
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.46
75 TestFunctional/serial/CacheCmd/cache/add_local 1.31
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.34
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.13
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.15
83 TestFunctional/serial/ExtraConfig 50.26
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.48
86 TestFunctional/serial/LogsFileCmd 1.74
87 TestFunctional/serial/InvalidService 4.78
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 7.56
91 TestFunctional/parallel/DryRun 0.57
92 TestFunctional/parallel/InternationalLanguage 0.31
93 TestFunctional/parallel/StatusCmd 1.28
97 TestFunctional/parallel/ServiceCmdConnect 8.77
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 24.25
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.18
104 TestFunctional/parallel/FileSync 0.38
105 TestFunctional/parallel/CertSync 2.37
109 TestFunctional/parallel/NodeLabels 0.11
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.83
113 TestFunctional/parallel/License 0.34
114 TestFunctional/parallel/Version/short 0.24
115 TestFunctional/parallel/Version/components 1.48
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.3
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.29
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.28
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.18
121 TestFunctional/parallel/ImageCommands/Setup 0.69
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.21
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.22
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.43
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.34
127 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.88
129 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.65
130 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
132 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.49
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.93
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.75
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
143 TestFunctional/parallel/ServiceCmd/DeployApp 6.24
144 TestFunctional/parallel/ServiceCmd/List 0.52
145 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
146 TestFunctional/parallel/ServiceCmd/HTTPS 0.45
147 TestFunctional/parallel/ServiceCmd/Format 0.51
148 TestFunctional/parallel/ProfileCmd/profile_not_create 0.56
149 TestFunctional/parallel/ServiceCmd/URL 0.53
150 TestFunctional/parallel/ProfileCmd/profile_list 0.75
151 TestFunctional/parallel/MountCmd/any-port 7.99
152 TestFunctional/parallel/ProfileCmd/profile_json_output 0.57
153 TestFunctional/parallel/MountCmd/specific-port 2.32
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.5
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 168.48
163 TestMultiControlPlane/serial/DeployApp 43.92
164 TestMultiControlPlane/serial/PingHostFromPods 1.67
165 TestMultiControlPlane/serial/AddWorkerNode 61.2
166 TestMultiControlPlane/serial/NodeLabels 0.1
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.09
168 TestMultiControlPlane/serial/CopyFile 20.81
169 TestMultiControlPlane/serial/StopSecondaryNode 2.28
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.81
171 TestMultiControlPlane/serial/RestartSecondaryNode 12.52
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.81
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 91.37
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.6
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 25.41
177 TestMultiControlPlane/serial/RestartCluster 60.57
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
179 TestMultiControlPlane/serial/AddSecondaryNode 89.72
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.09
185 TestJSONOutput/start/Command 78.81
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.73
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.66
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.09
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.25
210 TestKicCustomNetwork/create_custom_network 39.77
211 TestKicCustomNetwork/use_default_bridge_network 36.55
212 TestKicExistingNetwork 37.62
213 TestKicCustomSubnet 35.83
214 TestKicStaticIP 34.05
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 75.23
219 TestMountStart/serial/StartWithMountFirst 5.94
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.34
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.72
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.3
226 TestMountStart/serial/RestartStopped 7.77
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 108.22
231 TestMultiNode/serial/DeployApp2Nodes 5.86
232 TestMultiNode/serial/PingHostFrom2Pods 1.03
233 TestMultiNode/serial/AddNode 27.3
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.75
236 TestMultiNode/serial/CopyFile 10.66
237 TestMultiNode/serial/StopNode 2.63
238 TestMultiNode/serial/StartAfterStop 7.72
239 TestMultiNode/serial/RestartKeepsNodes 83.79
240 TestMultiNode/serial/DeleteNode 5.68
241 TestMultiNode/serial/StopMultiNode 24.07
242 TestMultiNode/serial/RestartMultiNode 50.88
243 TestMultiNode/serial/ValidateNameConflict 36.46
248 TestPreload 150.95
250 TestScheduledStopUnix 110.53
253 TestInsufficientStorage 10.31
254 TestRunningBinaryUpgrade 63.33
256 TestKubernetesUpgrade 349.77
257 TestMissingContainerUpgrade 158.71
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
260 TestNoKubernetes/serial/StartWithK8s 41.13
261 TestNoKubernetes/serial/StartWithStopK8s 17.63
262 TestNoKubernetes/serial/Start 8.12
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
265 TestNoKubernetes/serial/ProfileList 1.21
266 TestNoKubernetes/serial/Stop 1.41
267 TestNoKubernetes/serial/StartNoArgs 8.42
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
269 TestStoppedBinaryUpgrade/Setup 0.7
270 TestStoppedBinaryUpgrade/Upgrade 64.17
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.77
280 TestPause/serial/Start 51.33
281 TestPause/serial/SecondStartNoReconfiguration 6.18
282 TestPause/serial/Pause 0.79
283 TestPause/serial/VerifyStatus 0.45
284 TestPause/serial/Unpause 0.68
285 TestPause/serial/PauseAgain 0.83
286 TestPause/serial/DeletePaused 3.06
287 TestPause/serial/VerifyDeletedResources 0.41
295 TestNetworkPlugins/group/false 4.9
300 TestStartStop/group/old-k8s-version/serial/FirstStart 63.98
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.19
303 TestStartStop/group/old-k8s-version/serial/Stop 12.1
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
305 TestStartStop/group/old-k8s-version/serial/SecondStart 50.85
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.3
309 TestStartStop/group/old-k8s-version/serial/Pause 3.18
311 TestStartStop/group/no-preload/serial/FirstStart 67.84
313 TestStartStop/group/embed-certs/serial/FirstStart 58.16
316 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
317 TestStartStop/group/no-preload/serial/Stop 12.2
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
319 TestStartStop/group/embed-certs/serial/Stop 12.24
320 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
321 TestStartStop/group/no-preload/serial/SecondStart 54.83
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.35
323 TestStartStop/group/embed-certs/serial/SecondStart 54.75
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.11
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
327 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.26
328 TestStartStop/group/no-preload/serial/Pause 3.1
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.17
331 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 88.27
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
333 TestStartStop/group/embed-certs/serial/Pause 3.97
335 TestStartStop/group/newest-cni/serial/FirstStart 39.68
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.01
338 TestStartStop/group/newest-cni/serial/Stop 1.37
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
340 TestStartStop/group/newest-cni/serial/SecondStart 15.74
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
344 TestStartStop/group/newest-cni/serial/Pause 3.27
345 TestNetworkPlugins/group/auto/Start 83.96
347 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.33
348 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.55
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.7
351 TestNetworkPlugins/group/auto/KubeletFlags 0.3
352 TestNetworkPlugins/group/auto/NetCatPod 10.31
353 TestNetworkPlugins/group/auto/DNS 0.2
354 TestNetworkPlugins/group/auto/Localhost 0.16
355 TestNetworkPlugins/group/auto/HairPin 0.15
356 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
357 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.16
358 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.32
359 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.37
360 TestNetworkPlugins/group/kindnet/Start 87.1
361 TestNetworkPlugins/group/calico/Start 63.64
362 TestNetworkPlugins/group/calico/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/KubeletFlags 0.32
364 TestNetworkPlugins/group/calico/NetCatPod 9.28
365 TestNetworkPlugins/group/calico/DNS 0.19
366 TestNetworkPlugins/group/calico/Localhost 0.16
367 TestNetworkPlugins/group/calico/HairPin 0.16
368 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
369 TestNetworkPlugins/group/kindnet/KubeletFlags 0.47
370 TestNetworkPlugins/group/kindnet/NetCatPod 10.37
371 TestNetworkPlugins/group/kindnet/DNS 0.3
372 TestNetworkPlugins/group/kindnet/Localhost 0.34
373 TestNetworkPlugins/group/kindnet/HairPin 0.28
374 TestNetworkPlugins/group/custom-flannel/Start 62.46
375 TestNetworkPlugins/group/enable-default-cni/Start 75.66
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.43
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.28
378 TestNetworkPlugins/group/custom-flannel/DNS 0.17
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
381 TestNetworkPlugins/group/flannel/Start 65.63
382 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.64
383 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.33
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
387 TestNetworkPlugins/group/bridge/Start 72.87
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.34
390 TestNetworkPlugins/group/flannel/NetCatPod 10.42
391 TestNetworkPlugins/group/flannel/DNS 0.18
392 TestNetworkPlugins/group/flannel/Localhost 0.16
393 TestNetworkPlugins/group/flannel/HairPin 0.15
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
395 TestNetworkPlugins/group/bridge/NetCatPod 9.26
396 TestNetworkPlugins/group/bridge/DNS 0.17
397 TestNetworkPlugins/group/bridge/Localhost 0.14
398 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.28.0/json-events (36.56s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-023894 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-023894 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (36.55817161s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (36.56s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 02:50:12.925896  257069 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1124 02:50:12.925978  257069 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
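
The check above only confirms that the preload tarball for this Kubernetes/runtime pair is already cached on disk. A minimal way to reproduce the same check by hand, assuming the default MINIKUBE_HOME of ~/.minikube rather than the Jenkins workspace path used in this run:

    # List preload tarballs that minikube has cached locally (path assumes the default MINIKUBE_HOME).
    ls -lh ~/.minikube/cache/preloaded-tarball/
    # Re-running the download-only flow is effectively a no-op when the preload is already present.
    minikube start --download-only --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker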

TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-023894
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-023894: exit status 85 (84.889282ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-023894 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-023894 │ jenkins │ v1.37.0 │ 24 Nov 25 02:49 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:49:36
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:49:36.413369  257074 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:49:36.413500  257074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:49:36.413536  257074 out.go:374] Setting ErrFile to fd 2...
	I1124 02:49:36.413550  257074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:49:36.413814  257074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	W1124 02:49:36.413937  257074 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21975-255205/.minikube/config/config.json: open /home/jenkins/minikube-integration/21975-255205/.minikube/config/config.json: no such file or directory
	I1124 02:49:36.414336  257074 out.go:368] Setting JSON to true
	I1124 02:49:36.415133  257074 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5505,"bootTime":1763947072,"procs":153,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 02:49:36.415200  257074 start.go:143] virtualization:  
	I1124 02:49:36.421569  257074 out.go:99] [download-only-023894] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1124 02:49:36.421744  257074 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 02:49:36.421801  257074 notify.go:221] Checking for updates...
	I1124 02:49:36.425344  257074 out.go:171] MINIKUBE_LOCATION=21975
	I1124 02:49:36.428676  257074 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:49:36.432722  257074 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 02:49:36.435740  257074 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 02:49:36.438885  257074 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 02:49:36.444937  257074 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 02:49:36.445225  257074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:49:36.475271  257074 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 02:49:36.475369  257074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:49:36.532814  257074 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-24 02:49:36.523321387 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 02:49:36.532919  257074 docker.go:319] overlay module found
	I1124 02:49:36.535996  257074 out.go:99] Using the docker driver based on user configuration
	I1124 02:49:36.536055  257074 start.go:309] selected driver: docker
	I1124 02:49:36.536073  257074 start.go:927] validating driver "docker" against <nil>
	I1124 02:49:36.536180  257074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:49:36.596833  257074 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-24 02:49:36.58743011 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 02:49:36.596990  257074 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:49:36.597281  257074 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 02:49:36.597444  257074 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 02:49:36.600535  257074 out.go:171] Using Docker driver with root privileges
	I1124 02:49:36.603527  257074 cni.go:84] Creating CNI manager for ""
	I1124 02:49:36.603601  257074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:49:36.603617  257074 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 02:49:36.603705  257074 start.go:353] cluster config:
	{Name:download-only-023894 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-023894 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:49:36.606727  257074 out.go:99] Starting "download-only-023894" primary control-plane node in "download-only-023894" cluster
	I1124 02:49:36.606748  257074 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 02:49:36.609637  257074 out.go:99] Pulling base image v0.0.48-1763935653-21975 ...
	I1124 02:49:36.609686  257074 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 02:49:36.609781  257074 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 02:49:36.625907  257074 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 02:49:36.626099  257074 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory
	I1124 02:49:36.626196  257074 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 02:49:36.661141  257074 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1124 02:49:36.661175  257074 cache.go:65] Caching tarball of preloaded images
	I1124 02:49:36.661346  257074 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1124 02:49:36.664603  257074 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1124 02:49:36.664638  257074 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1124 02:49:36.846987  257074 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1124 02:49:36.847145  257074 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1124 02:49:41.722961  257074 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 as a tarball
	
	
	* The control-plane node download-only-023894 host does not exist
	  To start a cluster, run: "minikube start -p download-only-023894"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-023894
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (38.25s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-739794 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-739794 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (38.250561016s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (38.25s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1124 02:50:51.618702  257069 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1124 02:50:51.618737  257069 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.1s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-739794
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-739794: exit status 85 (96.23438ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-023894 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-023894 │ jenkins │ v1.37.0 │ 24 Nov 25 02:49 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 02:50 UTC │ 24 Nov 25 02:50 UTC │
	│ delete  │ -p download-only-023894                                                                                                                                                               │ download-only-023894 │ jenkins │ v1.37.0 │ 24 Nov 25 02:50 UTC │ 24 Nov 25 02:50 UTC │
	│ start   │ -o=json --download-only -p download-only-739794 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-739794 │ jenkins │ v1.37.0 │ 24 Nov 25 02:50 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 02:50:13
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 02:50:13.409147  257275 out.go:360] Setting OutFile to fd 1 ...
	I1124 02:50:13.409349  257275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:50:13.409377  257275 out.go:374] Setting ErrFile to fd 2...
	I1124 02:50:13.409397  257275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 02:50:13.409805  257275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 02:50:13.410601  257275 out.go:368] Setting JSON to true
	I1124 02:50:13.411438  257275 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5542,"bootTime":1763947072,"procs":148,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 02:50:13.411533  257275 start.go:143] virtualization:  
	I1124 02:50:13.415345  257275 out.go:99] [download-only-739794] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 02:50:13.415609  257275 notify.go:221] Checking for updates...
	I1124 02:50:13.418687  257275 out.go:171] MINIKUBE_LOCATION=21975
	I1124 02:50:13.421828  257275 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 02:50:13.424948  257275 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 02:50:13.427927  257275 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 02:50:13.430998  257275 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1124 02:50:13.436649  257275 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 02:50:13.436985  257275 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 02:50:13.458144  257275 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 02:50:13.458259  257275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:50:13.519076  257275 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-24 02:50:13.510109815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 02:50:13.519186  257275 docker.go:319] overlay module found
	I1124 02:50:13.522114  257275 out.go:99] Using the docker driver based on user configuration
	I1124 02:50:13.522163  257275 start.go:309] selected driver: docker
	I1124 02:50:13.522173  257275 start.go:927] validating driver "docker" against <nil>
	I1124 02:50:13.522296  257275 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 02:50:13.579860  257275 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-24 02:50:13.570674144 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 02:50:13.580014  257275 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 02:50:13.580303  257275 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1124 02:50:13.580540  257275 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 02:50:13.583538  257275 out.go:171] Using Docker driver with root privileges
	I1124 02:50:13.586220  257275 cni.go:84] Creating CNI manager for ""
	I1124 02:50:13.586298  257275 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1124 02:50:13.586313  257275 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 02:50:13.586406  257275 start.go:353] cluster config:
	{Name:download-only-739794 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-739794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 02:50:13.589526  257275 out.go:99] Starting "download-only-739794" primary control-plane node in "download-only-739794" cluster
	I1124 02:50:13.589547  257275 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1124 02:50:13.592352  257275 out.go:99] Pulling base image v0.0.48-1763935653-21975 ...
	I1124 02:50:13.592396  257275 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 02:50:13.592620  257275 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local docker daemon
	I1124 02:50:13.608914  257275 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 to local cache
	I1124 02:50:13.609076  257275 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory
	I1124 02:50:13.609097  257275 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 in local cache directory, skipping pull
	I1124 02:50:13.609102  257275 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 exists in cache, skipping pull
	I1124 02:50:13.609110  257275 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 as a tarball
	I1124 02:50:13.653873  257275 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1124 02:50:13.653902  257275 cache.go:65] Caching tarball of preloaded images
	I1124 02:50:13.654071  257275 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1124 02:50:13.657166  257275 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1124 02:50:13.657209  257275 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1124 02:50:13.747122  257275 preload.go:295] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1124 02:50:13.747190  257275 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21975-255205/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-739794 host does not exist
	  To start a cluster, run: "minikube start -p download-only-739794"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.10s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-739794
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1124 02:50:52.805205  257069 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-747689 --alsologtostderr --binary-mirror http://127.0.0.1:44867 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-747689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-747689
--- PASS: TestBinaryMirror (0.59s)
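
TestBinaryMirror starts a throwaway download-only profile with --binary-mirror pointing at a local HTTP endpoint instead of dl.k8s.io. A sketch of the same flow against a self-hosted mirror (the URL and profile name below are placeholders, not values from this run):

    # Fetch kubectl/kubelet/kubeadm from a custom mirror rather than dl.k8s.io (URL is illustrative).
    minikube start --download-only -p mirror-demo \
      --binary-mirror http://mirror.example.internal:8080 \
      --driver=docker --container-runtime=containerd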

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-335123
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-335123: exit status 85 (82.272568ms)

-- stdout --
	* Profile "addons-335123" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-335123"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-335123
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-335123: exit status 85 (82.637224ms)

-- stdout --
	* Profile "addons-335123" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-335123"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (174.43s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-335123 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-335123 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m54.427628537s)
--- PASS: TestAddons/Setup (174.43s)
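
The setup above enables every addon under test via repeated --addons flags at start time; addons can equally be toggled on a cluster that is already running, which is what the later disable steps rely on. A small sketch, with an illustrative profile name:

    # Inspect and toggle addons on an existing profile (profile name is illustrative).
    minikube -p addons-demo addons list
    minikube -p addons-demo addons enable metrics-server
    minikube -p addons-demo addons disable metrics-server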

TestAddons/serial/Volcano (39.73s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 72.591609ms
addons_test.go:868: volcano-scheduler stabilized in 72.638125ms
addons_test.go:876: volcano-admission stabilized in 72.833474ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-66mhg" [212c4abe-046b-4fac-82ed-656984945c09] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003920606s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-n6zm4" [cc7b877a-7cb4-4fe3-872a-0b4c9273a137] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003117815s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-d69fg" [7271e5bb-9abc-425d-8d83-a4964a96b7c8] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003166473s
addons_test.go:903: (dbg) Run:  kubectl --context addons-335123 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-335123 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-335123 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [dba31793-03ab-406c-9a05-e9bbb2bbddc9] Pending
helpers_test.go:352: "test-job-nginx-0" [dba31793-03ab-406c-9a05-e9bbb2bbddc9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [dba31793-03ab-406c-9a05-e9bbb2bbddc9] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.013901115s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-335123 addons disable volcano --alsologtostderr -v=1: (12.021243947s)
--- PASS: TestAddons/serial/Volcano (39.73s)
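
The test above polls for pods matching several app= labels in the volcano-system namespace before submitting a vcjob. An equivalent manual wait, assuming kubectl is pointed at the same context (the timeout value is illustrative):

    # Block until the volcano scheduler pod reports Ready, mirroring the label the test polls on.
    kubectl -n volcano-system wait --for=condition=Ready pod -l app=volcano-scheduler --timeout=6m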

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-335123 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-335123 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (8.86s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-335123 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-335123 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b79ff9a3-da61-4b19-95cd-9f90dcf3f66d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b79ff9a3-da61-4b19-95cd-9f90dcf3f66d] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003938967s
addons_test.go:694: (dbg) Run:  kubectl --context addons-335123 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-335123 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-335123 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-335123 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.86s)
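
This test verifies that the gcp-auth webhook injected fake credentials into a freshly created default-namespace pod. The same spot-check can be run by hand against the busybox pod from testdata/busybox.yaml (context name taken from this run):

    # Confirm the injected credential file and project env vars are visible inside the pod.
    kubectl --context addons-335123 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
    kubectl --context addons-335123 exec busybox -- cat /google-app-creds.json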

TestAddons/parallel/Registry (17.36s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.02412ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-rhkz5" [e405f308-7070-4b5f-8597-b3d94d9ff0a4] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006680516s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-h5zpm" [39d276ce-d3b3-486f-a0dc-dec6fb1f475d] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.009591314s
addons_test.go:392: (dbg) Run:  kubectl --context addons-335123 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-335123 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-335123 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.270941092s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 ip
2025/11/24 02:55:02 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.36s)
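
The registry check resolves the addon's in-cluster service by DNS and probes it with wget --spider from a short-lived pod. The same probe, using the image and service name from this run (the pod name is illustrative):

    # One-off pod that exits non-zero if the registry service is unreachable from inside the cluster.
    kubectl run registry-probe --rm -it --restart=Never --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"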

TestAddons/parallel/RegistryCreds (0.78s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 5.105434ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-335123
addons_test.go:332: (dbg) Run:  kubectl --context addons-335123 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.78s)

                                                
                                    
TestAddons/parallel/Ingress (19.8s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-335123 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-335123 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-335123 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [8a56cc06-f0d4-47cd-b809-1c77d9cee4be] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [8a56cc06-f0d4-47cd-b809-1c77d9cee4be] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004263476s
I1124 02:56:19.446143  257069 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-335123 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-335123 addons disable ingress-dns --alsologtostderr -v=1: (1.2832292s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-335123 addons disable ingress --alsologtostderr -v=1: (7.812088577s)
--- PASS: TestAddons/parallel/Ingress (19.80s)
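
The ingress check above boils down to curling the controller with the expected Host header; a minimal sketch, assuming the nginx Ingress and ingress-dns manifests from testdata are still applied:

    # Hit the ingress controller from inside the node with the Host header it routes on.
    out/minikube-linux-arm64 -p addons-335123 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    # With ingress-dns enabled, names from the example zone resolve against the node IP.
    nslookup hello-john.test "$(out/minikube-linux-arm64 -p addons-335123 ip)"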

                                                
                                    
TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vzlft" [83609209-1a59-441a-8523-ac1f9cfa1875] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004435066s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-335123 addons disable inspektor-gadget --alsologtostderr -v=1: (5.809135665s)
--- PASS: TestAddons/parallel/InspektorGadget (11.81s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.79s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.080669ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-scsv7" [2911fe77-dd15-4d44-8457-6f6344870fcd] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003667453s
addons_test.go:463: (dbg) Run:  kubectl --context addons-335123 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)
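
Once metrics-server is healthy, resource usage can be queried directly; a quick sketch of the same check plus the node-level variant:

    kubectl --context addons-335123 top pods -n kube-system
    kubectl --context addons-335123 top nodes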

                                                
                                    
TestAddons/parallel/CSI (49.35s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1124 02:54:57.782803  257069 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 02:54:57.787373  257069 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 02:54:57.787399  257069 kapi.go:107] duration metric: took 7.484705ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 7.496357ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-335123 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-335123 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [1e60ba2a-898e-44ab-8ddc-e6893e58681e] Pending
helpers_test.go:352: "task-pv-pod" [1e60ba2a-898e-44ab-8ddc-e6893e58681e] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [1e60ba2a-898e-44ab-8ddc-e6893e58681e] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003808177s
addons_test.go:572: (dbg) Run:  kubectl --context addons-335123 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-335123 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-335123 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-335123 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-335123 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-335123 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-335123 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [b9651ad3-6369-49f3-a04b-696a1bef29ed] Pending
helpers_test.go:352: "task-pv-pod-restore" [b9651ad3-6369-49f3-a04b-696a1bef29ed] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [b9651ad3-6369-49f3-a04b-696a1bef29ed] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003649385s
addons_test.go:614: (dbg) Run:  kubectl --context addons-335123 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-335123 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-335123 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-335123 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.015751811s)
--- PASS: TestAddons/parallel/CSI (49.35s)
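
The CSI flow exercised above is PVC -> pod -> VolumeSnapshot -> restored PVC; a condensed sketch of the same sequence using the testdata manifests referenced in the log (the jsonpath wait is just one way to poll the readyToUse field the test checks):

    kubectl --context addons-335123 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-335123 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-335123 create -f testdata/csi-hostpath-driver/snapshot.yaml
    # Wait until the snapshot is usable before restoring from it.
    kubectl --context addons-335123 wait volumesnapshot/new-snapshot-demo --for=jsonpath='{.status.readyToUse}'=true --timeout=6m
    kubectl --context addons-335123 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-335123 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml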

                                                
                                    
TestAddons/parallel/Headlamp (11.28s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-335123 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-6vg8n" [e85786de-0591-4d44-9189-e6d88b6d787e] Pending
helpers_test.go:352: "headlamp-dfcdc64b-6vg8n" [e85786de-0591-4d44-9189-e6d88b6d787e] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-6vg8n" [e85786de-0591-4d44-9189-e6d88b6d787e] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.00314621s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.28s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.66s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-mprmj" [dd80805c-a941-47cc-b5f5-8cc908b3bffb] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003928333s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.66s)

                                                
                                    
TestAddons/parallel/LocalPath (51.6s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-335123 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-335123 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-335123 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [413eb164-8a8f-4203-8287-86d6d6e33354] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [413eb164-8a8f-4203-8287-86d6d6e33354] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [413eb164-8a8f-4203-8287-86d6d6e33354] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004162014s
addons_test.go:967: (dbg) Run:  kubectl --context addons-335123 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 ssh "cat /opt/local-path-provisioner/pvc-661f505b-a0d4-4c3e-9d4e-9531bfaee1f4_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-335123 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-335123 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-335123 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.136120425s)
--- PASS: TestAddons/parallel/LocalPath (51.60s)
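
The local-path provisioner backs each bound PVC with a host directory; a minimal way to see that on the node while a test-pvc volume still exists (sketch only):

    # Provisioned volumes land under /opt/local-path-provisioner, as the cat of file1 above shows.
    out/minikube-linux-arm64 -p addons-335123 ssh "ls /opt/local-path-provisioner/"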

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-jh5g8" [16975618-78a8-46aa-acc4-a5d9f081ef68] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004472386s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.56s)

                                                
                                    
TestAddons/parallel/Yakd (11.85s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-w2hw8" [6147d172-de76-452f-a3eb-fa1068b1f7c4] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003797963s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-335123 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-335123 addons disable yakd --alsologtostderr -v=1: (5.840768926s)
--- PASS: TestAddons/parallel/Yakd (11.85s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.34s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-335123
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-335123: (12.048718122s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-335123
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-335123
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-335123
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

                                                
                                    
TestCertOptions (38.59s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-216763 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-216763 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (35.716558515s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-216763 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-216763 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-216763 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-216763" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-216763
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-216763: (2.12993104s)
--- PASS: TestCertOptions (38.59s)
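
The cert-options run verifies that the extra SANs and the non-default apiserver port end up in the serving certificate; a minimal manual check along the same lines, assuming a profile started with the flags above:

    # Confirm 192.168.15.15 and www.google.com appear among the apiserver cert SANs.
    out/minikube-linux-arm64 -p cert-options-216763 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # The in-node kubeconfig should point at the custom port 8555.
    out/minikube-linux-arm64 ssh -p cert-options-216763 -- "sudo cat /etc/kubernetes/admin.conf" | grep server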

                                                
                                    
TestCertExpiration (233.77s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-846384 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-846384 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.123707252s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-846384 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-846384 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.366999423s)
helpers_test.go:175: Cleaning up "cert-expiration-846384" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-846384
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-846384: (3.275922744s)
--- PASS: TestCertExpiration (233.77s)

                                                
                                    
TestForceSystemdFlag (36.52s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-090425 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-090425 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (33.563817759s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-090425 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-090425" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-090425
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-090425: (2.624918802s)
--- PASS: TestForceSystemdFlag (36.52s)
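
--force-systemd switches the container runtime to the systemd cgroup driver; a quick way to eyeball that in the containerd config (sketch, using this run's profile name):

    # SystemdCgroup = true is the runc setting the flag is expected to flip.
    out/minikube-linux-arm64 -p force-systemd-flag-090425 ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup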

                                                
                                    
TestForceSystemdEnv (43.54s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-574539 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-574539 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.644303826s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-574539 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-574539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-574539
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-574539: (2.511913326s)
--- PASS: TestForceSystemdEnv (43.54s)

                                                
                                    
TestDockerEnvContainerd (46.44s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-161513 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-161513 --driver=docker  --container-runtime=containerd: (29.85360847s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-161513"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-161513": (1.127606107s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4GTxkV11sdtn/agent.277105" SSH_AGENT_PID="277106" DOCKER_HOST=ssh://docker@127.0.0.1:33138 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4GTxkV11sdtn/agent.277105" SSH_AGENT_PID="277106" DOCKER_HOST=ssh://docker@127.0.0.1:33138 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4GTxkV11sdtn/agent.277105" SSH_AGENT_PID="277106" DOCKER_HOST=ssh://docker@127.0.0.1:33138 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.336401152s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-4GTxkV11sdtn/agent.277105" SSH_AGENT_PID="277106" DOCKER_HOST=ssh://docker@127.0.0.1:33138 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-161513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-161513
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-161513: (2.582029833s)
--- PASS: TestDockerEnvContainerd (46.44s)
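
docker-env with --ssh-host --ssh-add points a host-side docker CLI at the daemon inside the node over SSH; a minimal interactive sketch of the workflow the test scripts above (eval-ing the output is the usual way to load it into the current shell):

    eval "$(out/minikube-linux-arm64 -p dockerenv-161513 docker-env --ssh-host --ssh-add)"
    docker version
    docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
    docker image ls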

                                                
                                    
TestErrorSpam/setup (32.18s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-131018 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-131018 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-131018 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-131018 --driver=docker  --container-runtime=containerd: (32.178447137s)
--- PASS: TestErrorSpam/setup (32.18s)

                                                
                                    
TestErrorSpam/start (0.82s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 start --dry-run
--- PASS: TestErrorSpam/start (0.82s)

                                                
                                    
TestErrorSpam/status (1.2s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 status
--- PASS: TestErrorSpam/status (1.20s)

                                                
                                    
TestErrorSpam/pause (1.75s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 pause
--- PASS: TestErrorSpam/pause (1.75s)

                                                
                                    
TestErrorSpam/unpause (1.82s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 unpause
--- PASS: TestErrorSpam/unpause (1.82s)

                                                
                                    
TestErrorSpam/stop (1.61s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 stop: (1.394551308s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-131018 --log_dir /tmp/nospam-131018 stop
--- PASS: TestErrorSpam/stop (1.61s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21975-255205/.minikube/files/etc/test/nested/copy/257069/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (79.86s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-930282 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1124 02:58:47.952800  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:47.959239  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:47.970698  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:47.992203  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:48.033588  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:48.114964  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:48.276404  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:48.598095  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:49.239503  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:50.520830  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:53.083486  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:58:58.205074  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:59:08.448327  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 02:59:28.929682  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-930282 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m19.862821003s)
--- PASS: TestFunctional/serial/StartWithProxy (79.86s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (7.1s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1124 02:59:38.205517  257069 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-930282 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-930282 --alsologtostderr -v=8: (7.098507678s)
functional_test.go:678: soft start took 7.100798645s for "functional-930282" cluster.
I1124 02:59:45.304441  257069 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.10s)

                                                
                                    
TestFunctional/serial/KubeContext (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.08s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-930282 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-930282 cache add registry.k8s.io/pause:3.1: (1.326531231s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-930282 cache add registry.k8s.io/pause:3.3: (1.128123311s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-930282 cache add registry.k8s.io/pause:latest: (1.003493234s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.46s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-930282 /tmp/TestFunctionalserialCacheCmdcacheadd_local2731662685/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 cache add minikube-local-cache-test:functional-930282
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 cache delete minikube-local-cache-test:functional-930282
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-930282
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.34s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-930282 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (311.661596ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)
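
The cache subtests above cover the whole image-cache lifecycle; a condensed sketch of the same commands run by hand against this profile:

    # Add an image to the cache (also loads it into the node), then verify from inside the node.
    out/minikube-linux-arm64 -p functional-930282 cache add registry.k8s.io/pause:3.1
    out/minikube-linux-arm64 cache list
    out/minikube-linux-arm64 -p functional-930282 ssh sudo crictl images
    # If an image was removed inside the node, reload pushes the cached copies back in.
    out/minikube-linux-arm64 -p functional-930282 cache reload
    out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1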

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 kubectl -- --context functional-930282 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-930282 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.15s)

                                                
                                    
TestFunctional/serial/ExtraConfig (50.26s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-930282 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 03:00:09.892218  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-930282 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (50.258713304s)
functional_test.go:776: restart took 50.258856428s for "functional-930282" cluster.
I1124 03:00:43.235053  257069 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (50.26s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-930282 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-930282 logs: (1.484467958s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 logs --file /tmp/TestFunctionalserialLogsFileCmd3089958602/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-930282 logs --file /tmp/TestFunctionalserialLogsFileCmd3089958602/001/logs.txt: (1.743753478s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.74s)

                                                
                                    
TestFunctional/serial/InvalidService (4.78s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-930282 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-930282
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-930282: exit status 115 (418.227059ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31205 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-930282 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-930282 delete -f testdata/invalidsvc.yaml: (1.089277291s)
--- PASS: TestFunctional/serial/InvalidService (4.78s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-930282 config get cpus: exit status 14 (69.540288ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-930282 config get cpus: exit status 14 (77.835234ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (7.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-930282 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-930282 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 293733: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.56s)

                                                
                                    
TestFunctional/parallel/DryRun (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-930282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-930282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (277.409374ms)

                                                
                                                
-- stdout --
	* [functional-930282] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:01:28.689861  293399 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:01:28.690074  293399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:01:28.690102  293399 out.go:374] Setting ErrFile to fd 2...
	I1124 03:01:28.690122  293399 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:01:28.690439  293399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:01:28.690863  293399 out.go:368] Setting JSON to false
	I1124 03:01:28.691931  293399 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6217,"bootTime":1763947072,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:01:28.692033  293399 start.go:143] virtualization:  
	I1124 03:01:28.701517  293399 out.go:179] * [functional-930282] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:01:28.705395  293399 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:01:28.705462  293399 notify.go:221] Checking for updates...
	I1124 03:01:28.711106  293399 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:01:28.714219  293399 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:01:28.717100  293399 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:01:28.719993  293399 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:01:28.722969  293399 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:01:28.726317  293399 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:01:28.727093  293399 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:01:28.774803  293399 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:01:28.774911  293399 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:01:28.884457  293399 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 03:01:28.873590933 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:01:28.884615  293399 docker.go:319] overlay module found
	I1124 03:01:28.887675  293399 out.go:179] * Using the docker driver based on existing profile
	I1124 03:01:28.890576  293399 start.go:309] selected driver: docker
	I1124 03:01:28.890599  293399 start.go:927] validating driver "docker" against &{Name:functional-930282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-930282 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:01:28.890716  293399 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:01:28.894287  293399 out.go:203] 
	W1124 03:01:28.897247  293399 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 03:01:28.900162  293399 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-930282 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.57s)
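Note: both DryRun invocations validate flags without creating a cluster; the first exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the 1800MB minimum reported in the stderr above. A sketch, under the same assumptions about binary path and profile, that asserts that exit code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same dry-run start as the failing case above: memory deliberately below the minimum.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-930282",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=containerd")
	err := cmd.Run()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 23 {
		fmt.Println("got exit 23: requested memory is below the 1800MB minimum")
	} else {
		fmt.Println("unexpected result:", err)
	}
}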

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-930282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-930282 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (306.915005ms)

                                                
                                                
-- stdout --
	* [functional-930282] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:01:28.412956  293300 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:01:28.413069  293300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:01:28.413076  293300 out.go:374] Setting ErrFile to fd 2...
	I1124 03:01:28.413082  293300 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:01:28.414133  293300 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:01:28.414568  293300 out.go:368] Setting JSON to false
	I1124 03:01:28.415600  293300 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6217,"bootTime":1763947072,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:01:28.415677  293300 start.go:143] virtualization:  
	I1124 03:01:28.419206  293300 out.go:179] * [functional-930282] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1124 03:01:28.423105  293300 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:01:28.423215  293300 notify.go:221] Checking for updates...
	I1124 03:01:28.428662  293300 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:01:28.431641  293300 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:01:28.434502  293300 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:01:28.437551  293300 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:01:28.440524  293300 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:01:28.443900  293300 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:01:28.444805  293300 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:01:28.509137  293300 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:01:28.509246  293300 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:01:28.605895  293300 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-24 03:01:28.59532418 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:01:28.605997  293300 docker.go:319] overlay module found
	I1124 03:01:28.609761  293300 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 03:01:28.612625  293300 start.go:309] selected driver: docker
	I1124 03:01:28.612647  293300 start.go:927] validating driver "docker" against &{Name:functional-930282 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763935653-21975@sha256:5273d148037cfb860f8152fbd08072e6c1f4b37ff9a51956a3c12965f5f2d787 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-930282 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 03:01:28.612767  293300 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:01:28.616226  293300 out.go:203] 
	W1124 03:01:28.619116  293300 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 03:01:28.622205  293300 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-930282 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-930282 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-wn697" [a7d174b8-7b98-4b9d-9b46-71213c4cd874] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-wn697" [a7d174b8-7b98-4b9d-9b46-71213c4cd874] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003430412s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31882
functional_test.go:1680: http://192.168.49.2:31882: success! body:
Request served by hello-node-connect-7d85dfc575-wn697

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31882
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.77s)
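Note: ServiceCmdConnect resolves the NodePort URL with `minikube service hello-node-connect --url` and then verifies the echo-server response body. A small sketch of that HTTP check against the endpoint found in this run; the port is assigned per run, so in practice it should be read from the `service --url` output rather than hard-coded:

package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

func main() {
	// Endpoint reported by "minikube service hello-node-connect --url" in this run.
	url := "http://192.168.49.2:31882"
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// echo-server replies with the name of the pod that served the request.
	if strings.Contains(string(body), "Request served by hello-node-connect") {
		fmt.Println("service reachable:", strings.SplitN(string(body), "\n", 2)[0])
	}
}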

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (24.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [72f0f75b-20df-4523-8a9c-6b7435fddcdd] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003614243s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-930282 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-930282 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-930282 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-930282 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [df60243e-4847-4252-ae17-5341a53feb92] Pending
helpers_test.go:352: "sp-pod" [df60243e-4847-4252-ae17-5341a53feb92] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [df60243e-4847-4252-ae17-5341a53feb92] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003450966s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-930282 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-930282 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-930282 delete -f testdata/storage-provisioner/pod.yaml: (1.139410207s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-930282 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [76a0d7d0-907f-4327-914d-05e434769245] Pending
helpers_test.go:352: "sp-pod" [76a0d7d0-907f-4327-914d-05e434769245] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [76a0d7d0-907f-4327-914d-05e434769245] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.00325678s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-930282 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.25s)
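Note: the PVC test proves persistence by writing /tmp/mount/foo into sp-pod, deleting and re-applying the pod manifest, and listing the mount again. A sketch that replays the kubectl sequence captured above against the same context; it omits the readiness wait the test performs between apply and exec, and the testdata/ paths are relative to the test's working directory:

package main

import (
	"fmt"
	"os/exec"
)

// kubectl runs kubectl against the context used by this test run.
func kubectl(args ...string) (string, error) {
	cmd := exec.Command("kubectl",
		append([]string{"--context", "functional-930282"}, args...)...)
	out, err := cmd.CombinedOutput()
	return string(out), err
}

func main() {
	// Write a marker file onto the PVC-backed mount, then recreate the pod from the same manifest.
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// In the test, a wait for the new pod to be Running happens before this check.
	out, err := kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")
	fmt.Println(out, err) // expected to list "foo" once the new pod is Running
}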

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (2.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh -n functional-930282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 cp functional-930282:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd87212137/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh -n functional-930282 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh -n functional-930282 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.18s)

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/257069/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo cat /etc/test/nested/copy/257069/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (2.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/257069.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo cat /etc/ssl/certs/257069.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/257069.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo cat /usr/share/ca-certificates/257069.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/2570692.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo cat /etc/ssl/certs/2570692.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/2570692.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo cat /usr/share/ca-certificates/2570692.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.37s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-930282 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-930282 ssh "sudo systemctl is-active docker": exit status 1 (404.242394ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-930282 ssh "sudo systemctl is-active crio": exit status 1 (428.688728ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.83s)
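Note: with containerd selected as the runtime, the docker and crio units are expected to be inactive; `systemctl is-active` prints "inactive" and exits 3, which `minikube ssh` propagates as the non-zero status seen above. A sketch of the same probe, assuming the functional-930282 profile is still running:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	for _, unit := range []string{"docker", "crio"} {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "functional-930282",
			"ssh", "sudo systemctl is-active "+unit)
		// Output() captures stdout only; err carries the non-zero exit status for inactive units.
		out, err := cmd.Output()
		fmt.Printf("%s: %s (exit err: %v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}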

                                                
                                    
x
+
TestFunctional/parallel/License (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 version --short
--- PASS: TestFunctional/parallel/Version/short (0.24s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (1.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-930282 version -o=json --components: (1.479852752s)
--- PASS: TestFunctional/parallel/Version/components (1.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-930282 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-930282
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-930282
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-930282 image ls --format short --alsologtostderr:
I1124 03:01:39.078750  295310 out.go:360] Setting OutFile to fd 1 ...
I1124 03:01:39.078976  295310 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:01:39.079006  295310 out.go:374] Setting ErrFile to fd 2...
I1124 03:01:39.079026  295310 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:01:39.079329  295310 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
I1124 03:01:39.081347  295310 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 03:01:39.081547  295310 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 03:01:39.082153  295310 cli_runner.go:164] Run: docker container inspect functional-930282 --format={{.State.Status}}
I1124 03:01:39.106710  295310 ssh_runner.go:195] Run: systemctl --version
I1124 03:01:39.106779  295310 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-930282
I1124 03:01:39.139356  295310 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/functional-930282/id_rsa Username:docker}
I1124 03:01:39.243346  295310 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-930282 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-930282  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-930282  │ sha256:dfd446 │ 991B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-930282 image ls --format table --alsologtostderr:
I1124 03:01:39.782751  295519 out.go:360] Setting OutFile to fd 1 ...
I1124 03:01:39.783039  295519 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:01:39.783071  295519 out.go:374] Setting ErrFile to fd 2...
I1124 03:01:39.783091  295519 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:01:39.783394  295519 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
I1124 03:01:39.784113  295519 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 03:01:39.784300  295519 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 03:01:39.784919  295519 cli_runner.go:164] Run: docker container inspect functional-930282 --format={{.State.Status}}
I1124 03:01:39.810288  295519 ssh_runner.go:195] Run: systemctl --version
I1124 03:01:39.810405  295519 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-930282
I1124 03:01:39.830809  295519 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/functional-930282/id_rsa Username:docker}
I1124 03:01:39.944017  295519 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-930282 image ls --format json --alsologtostderr:
[{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:dfd446ac13fc3bacd4be21002c67bccca51eb2632951570a06fda09749d07030","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-930282"],"size":"991"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d78
5895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kind
est/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":
["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-930282","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":[
"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-930282 image ls --format json --alsologtostderr:
I1124 03:01:39.489055  295446 out.go:360] Setting OutFile to fd 1 ...
I1124 03:01:39.489241  295446 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:01:39.489269  295446 out.go:374] Setting ErrFile to fd 2...
I1124 03:01:39.489288  295446 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:01:39.489647  295446 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
I1124 03:01:39.490326  295446 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 03:01:39.490512  295446 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 03:01:39.491160  295446 cli_runner.go:164] Run: docker container inspect functional-930282 --format={{.State.Status}}
I1124 03:01:39.529405  295446 ssh_runner.go:195] Run: systemctl --version
I1124 03:01:39.529467  295446 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-930282
I1124 03:01:39.552163  295446 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/functional-930282/id_rsa Username:docker}
I1124 03:01:39.659565  295446 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.29s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-930282 image ls --format yaml --alsologtostderr:
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:dfd446ac13fc3bacd4be21002c67bccca51eb2632951570a06fda09749d07030
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-930282
size: "991"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-930282
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-930282 image ls --format yaml --alsologtostderr:
I1124 03:01:39.202207  295363 out.go:360] Setting OutFile to fd 1 ...
I1124 03:01:39.202427  295363 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:01:39.202455  295363 out.go:374] Setting ErrFile to fd 2...
I1124 03:01:39.202474  295363 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:01:39.202764  295363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
I1124 03:01:39.203453  295363 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 03:01:39.203631  295363 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 03:01:39.204194  295363 cli_runner.go:164] Run: docker container inspect functional-930282 --format={{.State.Status}}
I1124 03:01:39.223156  295363 ssh_runner.go:195] Run: systemctl --version
I1124 03:01:39.223205  295363 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-930282
I1124 03:01:39.247718  295363 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/functional-930282/id_rsa Username:docker}
I1124 03:01:39.357179  295363 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.28s)
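For reference, a minimal way to reproduce the listing above by hand (assuming the same functional-930282 profile): the YAML is generated from the node's containerd image store, which the log shows is queried via crictl.
  # list images known to the cluster's container runtime, in YAML
  out/minikube-linux-arm64 -p functional-930282 image ls --format yaml
  # roughly the equivalent raw query executed on the node itself
  out/minikube-linux-arm64 -p functional-930282 ssh -- sudo crictl images --output json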

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (4.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-930282 ssh pgrep buildkitd: exit status 1 (339.297183ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image build -t localhost/my-image:functional-930282 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-930282 image build -t localhost/my-image:functional-930282 testdata/build --alsologtostderr: (3.604553255s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-930282 image build -t localhost/my-image:functional-930282 testdata/build --alsologtostderr:
I1124 03:01:39.686783  295504 out.go:360] Setting OutFile to fd 1 ...
I1124 03:01:39.687621  295504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:01:39.687638  295504 out.go:374] Setting ErrFile to fd 2...
I1124 03:01:39.687647  295504 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 03:01:39.688250  295504 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
I1124 03:01:39.689398  295504 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 03:01:39.691062  295504 config.go:182] Loaded profile config "functional-930282": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1124 03:01:39.691845  295504 cli_runner.go:164] Run: docker container inspect functional-930282 --format={{.State.Status}}
I1124 03:01:39.735837  295504 ssh_runner.go:195] Run: systemctl --version
I1124 03:01:39.735913  295504 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-930282
I1124 03:01:39.766176  295504 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/functional-930282/id_rsa Username:docker}
I1124 03:01:39.875442  295504 build_images.go:162] Building image from path: /tmp/build.995259629.tar
I1124 03:01:39.875527  295504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 03:01:39.885755  295504 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.995259629.tar
I1124 03:01:39.889976  295504 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.995259629.tar: stat -c "%s %y" /var/lib/minikube/build/build.995259629.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.995259629.tar': No such file or directory
I1124 03:01:39.890005  295504 ssh_runner.go:362] scp /tmp/build.995259629.tar --> /var/lib/minikube/build/build.995259629.tar (3072 bytes)
I1124 03:01:39.911089  295504 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.995259629
I1124 03:01:39.920115  295504 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.995259629 -xf /var/lib/minikube/build/build.995259629.tar
I1124 03:01:39.929479  295504 containerd.go:394] Building image: /var/lib/minikube/build/build.995259629
I1124 03:01:39.929561  295504 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.995259629 --local dockerfile=/var/lib/minikube/build/build.995259629 --output type=image,name=localhost/my-image:functional-930282
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.6s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.5s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:52a7ec0a6092552becea9f3515fd57537d92e4d31028739e02ac3eb4cc7ee085 0.0s done
#8 exporting config sha256:69ed704432b94fbcb82b60508687d402ec6e58267f7e2155bfd78de4547e1cae 0.0s done
#8 naming to localhost/my-image:functional-930282 done
#8 DONE 0.2s
I1124 03:01:43.202174  295504 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.995259629 --local dockerfile=/var/lib/minikube/build/build.995259629 --output type=image,name=localhost/my-image:functional-930282: (3.272580645s)
I1124 03:01:43.202255  295504 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.995259629
I1124 03:01:43.210200  295504 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.995259629.tar
I1124 03:01:43.219405  295504 build_images.go:218] Built localhost/my-image:functional-930282 from /tmp/build.995259629.tar
I1124 03:01:43.219438  295504 build_images.go:134] succeeded building to: functional-930282
I1124 03:01:43.219444  295504 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.18s)
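A note on what the build above actually does: per the log, the CLI tars testdata/build on the host, copies it to /var/lib/minikube/build inside the node, and drives BuildKit via buildctl. The sketch below is inferred from the BuildKit steps #5-#7, not the verbatim testdata file.
  # what the test invokes
  out/minikube-linux-arm64 -p functional-930282 image build -t localhost/my-image:functional-930282 testdata/build --alsologtostderr
  # inferred Dockerfile contents (from build steps [1/3]..[3/3] above):
  #   FROM gcr.io/k8s-minikube/busybox
  #   RUN true
  #   ADD content.txt /
  # confirm the image landed in the cluster runtime
  out/minikube-linux-arm64 -p functional-930282 image ls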

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-930282
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.69s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.22s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image load --daemon kicbase/echo-server:functional-930282 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-930282 image load --daemon kicbase/echo-server:functional-930282 --alsologtostderr: (1.113676921s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.43s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image load --daemon kicbase/echo-server:functional-930282 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-930282 image load --daemon kicbase/echo-server:functional-930282 --alsologtostderr: (1.039761355s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.88s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-930282
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image load --daemon kicbase/echo-server:functional-930282 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-arm64 -p functional-930282 image load --daemon kicbase/echo-server:functional-930282 --alsologtostderr: (1.197033314s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.88s)
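The three daemon-load subtests above exercise the same round trip; a condensed sketch using the commands from the log:
  docker pull kicbase/echo-server:latest
  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-930282
  # copy the image from the host docker daemon into the cluster runtime
  out/minikube-linux-arm64 -p functional-930282 image load --daemon kicbase/echo-server:functional-930282
  out/minikube-linux-arm64 -p functional-930282 image ls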

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-930282 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-930282 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-930282 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 290544: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-930282 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.65s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-930282 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-930282 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [7c8153c8-359a-4e81-9424-dd5c6a97e7c4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [7c8153c8-359a-4e81-9424-dd5c6a97e7c4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.003338735s
I1124 03:01:08.242744  257069 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image save kicbase/echo-server:functional-930282 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image rm kicbase/echo-server:functional-930282 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.93s)
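ImageSaveToFile, ImageRemove and ImageLoadFromFile together form a tar-based round trip; a minimal sketch (the tar path below is the one used in this run):
  out/minikube-linux-arm64 -p functional-930282 image save kicbase/echo-server:functional-930282 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-930282 image rm kicbase/echo-server:functional-930282
  out/minikube-linux-arm64 -p functional-930282 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar
  out/minikube-linux-arm64 -p functional-930282 image ls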

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-930282
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 image save --daemon kicbase/echo-server:functional-930282 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-930282
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.75s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-930282 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.108.31.237 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
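The tunnel subtests above amount to the following manual flow (a sketch; the ingress IP 10.108.31.237 is specific to this run, and curl is just one way to hit the endpoint):
  # terminal 1: keep the tunnel running
  out/minikube-linux-arm64 -p functional-930282 tunnel --alsologtostderr
  # terminal 2: create the LoadBalancer service and read its ingress IP
  kubectl --context functional-930282 apply -f testdata/testsvc.yaml
  kubectl --context functional-930282 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl http://10.108.31.237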

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-930282 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-930282 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-930282 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-zq9p8" [31991a08-2760-45ef-8b79-34072dac4c9e] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-zq9p8" [31991a08-2760-45ef-8b79-34072dac4c9e] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003622328s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.24s)
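The ServiceCmd subtests that follow all query the hello-node service created here; the underlying flow, as a sketch:
  kubectl --context functional-930282 create deployment hello-node --image kicbase/echo-server
  kubectl --context functional-930282 expose deployment hello-node --type=NodePort --port=8080
  # resolve the NodePort endpoint (the later List/HTTPS/URL subtests do the same)
  out/minikube-linux-arm64 -p functional-930282 service hello-node --url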

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 service list -o json
functional_test.go:1504: Took "523.714964ms" to run "out/minikube-linux-arm64 -p functional-930282 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32670
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.45s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.56s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32670
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "587.232795ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "160.830885ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.75s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdany-port2276545209/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763953286183279183" to /tmp/TestFunctionalparallelMountCmdany-port2276545209/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763953286183279183" to /tmp/TestFunctionalparallelMountCmdany-port2276545209/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763953286183279183" to /tmp/TestFunctionalparallelMountCmdany-port2276545209/001/test-1763953286183279183
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (578.828245ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 03:01:26.763150  257069 retry.go:31] will retry after 647.324606ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 03:01 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 03:01 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 03:01 test-1763953286183279183
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh cat /mount-9p/test-1763953286183279183
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-930282 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [bfe91bcb-dd44-488c-aa5b-489bf23dd042] Pending
helpers_test.go:352: "busybox-mount" [bfe91bcb-dd44-488c-aa5b-489bf23dd042] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E1124 03:01:31.813619  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [bfe91bcb-dd44-488c-aa5b-489bf23dd042] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [bfe91bcb-dd44-488c-aa5b-489bf23dd042] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003491044s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-930282 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdany-port2276545209/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.99s)
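The 9p mount flow exercised above, as a standalone sketch (the host path is the temp directory from this run; any writable host directory works):
  # share a host directory into the guest at /mount-9p (keeps running until killed)
  out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdany-port2276545209/001:/mount-9p
  # verify from inside the node
  out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-930282 ssh -- ls -la /mount-9p
  # tear down all mounts for the profile (the same mechanism VerifyCleanup below relies on)
  out/minikube-linux-arm64 mount -p functional-930282 --kill=true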

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "508.231602ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "64.646072ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.57s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdspecific-port3918728033/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (531.233393ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 03:01:34.702611  257069 retry.go:31] will retry after 465.851206ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdspecific-port3918728033/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-930282 ssh "sudo umount -f /mount-9p": exit status 1 (379.336164ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-930282 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdspecific-port3918728033/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
2025/11/24 03:01:36 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (2.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2191398551/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2191398551/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2191398551/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T" /mount1: exit status 1 (815.412209ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1124 03:01:37.314214  257069 retry.go:31] will retry after 540.894308ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-930282 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-930282 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2191398551/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2191398551/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-930282 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2191398551/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.50s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-930282
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-930282
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-930282
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (168.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1124 03:03:47.950655  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:04:15.655536  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m47.544680033s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (168.48s)
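The HA cluster used by the remaining TestMultiControlPlane subtests comes from a single start invocation (verbosity flags from the run omitted); status then reports one entry per control-plane and worker node:
  out/minikube-linux-arm64 -p ha-411787 start --ha --memory 3072 --wait true --driver=docker --container-runtime=containerd
  out/minikube-linux-arm64 -p ha-411787 status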

                                                
                                    
TestMultiControlPlane/serial/DeployApp (43.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 kubectl -- rollout status deployment/busybox: (5.41749943s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
I1124 03:04:40.949049  257069 retry.go:31] will retry after 842.605982ms: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
I1124 03:04:41.989563  257069 retry.go:31] will retry after 1.355838002s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
I1124 03:04:43.506343  257069 retry.go:31] will retry after 3.132716179s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
I1124 03:04:46.809847  257069 retry.go:31] will retry after 2.901533426s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
I1124 03:04:49.882342  257069 retry.go:31] will retry after 7.263869001s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
I1124 03:04:57.324656  257069 retry.go:31] will retry after 6.314530369s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
I1124 03:05:03.830093  257069 retry.go:31] will retry after 12.52265596s: expected 3 Pod IPs but got 4 (may be temporary), output: "\n-- stdout --\n\t'10.244.3.2 10.244.0.4 10.244.0.5 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-4w5h2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-78jhh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-ftb9s -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-4w5h2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-78jhh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-ftb9s -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-4w5h2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-78jhh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-ftb9s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (43.92s)
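On the retries above: the busybox deployment runs three replicas, so the helper polls until exactly three pod IPs are reported; the transient fourth IP most likely belongs to a pod that had not yet been cleaned up during the rollout. The DNS check itself reduces to the following sketch (substitute a pod name from the get pods output for <busybox-pod>):
  out/minikube-linux-arm64 -p ha-411787 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
  out/minikube-linux-arm64 -p ha-411787 kubectl -- rollout status deployment/busybox
  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local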

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-4w5h2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-4w5h2 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-78jhh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-78jhh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-ftb9s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 kubectl -- exec busybox-7b57f96db7-ftb9s -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (61.2s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 node add --alsologtostderr -v 5
E1124 03:05:57.756425  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:05:57.763291  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:05:57.774779  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:05:57.796279  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:05:57.837677  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:05:57.919165  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:05:58.080875  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:05:58.402374  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:05:59.044270  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:06:00.335922  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:06:02.897915  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:06:08.019287  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:06:18.261271  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 node add --alsologtostderr -v 5: (1m0.136058261s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5: (1.063797058s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.20s)
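The repeated cert_rotation warnings above appear to come from a client-go rotation loop still pointing at the already-deleted functional-930282 profile's client certificate and do not affect this test. A sketch of the same check by hand (new nodes follow minikube's <profile>-mNN naming):
  out/minikube-linux-arm64 -p ha-411787 node add
  out/minikube-linux-arm64 -p ha-411787 status
  kubectl --context ha-411787 get nodes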

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-411787 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.10s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.084870577s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.09s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 status --output json --alsologtostderr -v 5: (1.092743061s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp testdata/cp-test.txt ha-411787:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1974720604/001/cp-test_ha-411787.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787:/home/docker/cp-test.txt ha-411787-m02:/home/docker/cp-test_ha-411787_ha-411787-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m02 "sudo cat /home/docker/cp-test_ha-411787_ha-411787-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787:/home/docker/cp-test.txt ha-411787-m03:/home/docker/cp-test_ha-411787_ha-411787-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m03 "sudo cat /home/docker/cp-test_ha-411787_ha-411787-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787:/home/docker/cp-test.txt ha-411787-m04:/home/docker/cp-test_ha-411787_ha-411787-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m04 "sudo cat /home/docker/cp-test_ha-411787_ha-411787-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp testdata/cp-test.txt ha-411787-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1974720604/001/cp-test_ha-411787-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m02:/home/docker/cp-test.txt ha-411787:/home/docker/cp-test_ha-411787-m02_ha-411787.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787 "sudo cat /home/docker/cp-test_ha-411787-m02_ha-411787.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m02:/home/docker/cp-test.txt ha-411787-m03:/home/docker/cp-test_ha-411787-m02_ha-411787-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m03 "sudo cat /home/docker/cp-test_ha-411787-m02_ha-411787-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m02:/home/docker/cp-test.txt ha-411787-m04:/home/docker/cp-test_ha-411787-m02_ha-411787-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m04 "sudo cat /home/docker/cp-test_ha-411787-m02_ha-411787-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp testdata/cp-test.txt ha-411787-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1974720604/001/cp-test_ha-411787-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m03:/home/docker/cp-test.txt ha-411787:/home/docker/cp-test_ha-411787-m03_ha-411787.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787 "sudo cat /home/docker/cp-test_ha-411787-m03_ha-411787.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m03:/home/docker/cp-test.txt ha-411787-m02:/home/docker/cp-test_ha-411787-m03_ha-411787-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m02 "sudo cat /home/docker/cp-test_ha-411787-m03_ha-411787-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m03:/home/docker/cp-test.txt ha-411787-m04:/home/docker/cp-test_ha-411787-m03_ha-411787-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m04 "sudo cat /home/docker/cp-test_ha-411787-m03_ha-411787-m04.txt"
E1124 03:06:38.743134  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp testdata/cp-test.txt ha-411787-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1974720604/001/cp-test_ha-411787-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m04:/home/docker/cp-test.txt ha-411787:/home/docker/cp-test_ha-411787-m04_ha-411787.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787 "sudo cat /home/docker/cp-test_ha-411787-m04_ha-411787.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m04:/home/docker/cp-test.txt ha-411787-m02:/home/docker/cp-test_ha-411787-m04_ha-411787-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m02 "sudo cat /home/docker/cp-test_ha-411787-m04_ha-411787-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 cp ha-411787-m04:/home/docker/cp-test.txt ha-411787-m03:/home/docker/cp-test_ha-411787-m04_ha-411787-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m03 "sudo cat /home/docker/cp-test_ha-411787-m04_ha-411787-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.81s)
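
Note: the round trip exercised above can be reproduced by hand with the same two subcommands the helpers use; this is only a sketch, with the profile, node, and file names taken from this run:

    out/minikube-linux-arm64 -p ha-411787 cp testdata/cp-test.txt ha-411787-m02:/home/docker/cp-test.txt
    out/minikube-linux-arm64 -p ha-411787 ssh -n ha-411787-m02 "sudo cat /home/docker/cp-test.txt" | diff - testdata/cp-test.txt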

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (2.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 node stop m02 --alsologtostderr -v 5: (1.461823644s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5: exit status 7 (814.051489ms)

                                                
                                                
-- stdout --
	ha-411787
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-411787-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-411787-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-411787-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:06:45.382357  312051 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:06:45.382531  312051 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:06:45.382547  312051 out.go:374] Setting ErrFile to fd 2...
	I1124 03:06:45.382553  312051 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:06:45.383101  312051 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:06:45.383926  312051 out.go:368] Setting JSON to false
	I1124 03:06:45.383967  312051 mustload.go:66] Loading cluster: ha-411787
	I1124 03:06:45.384642  312051 notify.go:221] Checking for updates...
	I1124 03:06:45.386235  312051 config.go:182] Loaded profile config "ha-411787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:06:45.386275  312051 status.go:174] checking status of ha-411787 ...
	I1124 03:06:45.387569  312051 cli_runner.go:164] Run: docker container inspect ha-411787 --format={{.State.Status}}
	I1124 03:06:45.416544  312051 status.go:371] ha-411787 host status = "Running" (err=<nil>)
	I1124 03:06:45.416572  312051 host.go:66] Checking if "ha-411787" exists ...
	I1124 03:06:45.416898  312051 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-411787
	I1124 03:06:45.449602  312051 host.go:66] Checking if "ha-411787" exists ...
	I1124 03:06:45.450019  312051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:06:45.450094  312051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-411787
	I1124 03:06:45.473271  312051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/ha-411787/id_rsa Username:docker}
	I1124 03:06:45.578429  312051 ssh_runner.go:195] Run: systemctl --version
	I1124 03:06:45.585576  312051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:06:45.599416  312051 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:06:45.661993  312051 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-24 03:06:45.651948793 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:06:45.662584  312051 kubeconfig.go:125] found "ha-411787" server: "https://192.168.49.254:8443"
	I1124 03:06:45.662632  312051 api_server.go:166] Checking apiserver status ...
	I1124 03:06:45.662682  312051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:06:45.676022  312051 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1453/cgroup
	I1124 03:06:45.684739  312051 api_server.go:182] apiserver freezer: "6:freezer:/docker/1ba91d0b32393733349545e219cb2ef307350060db6ac170f5fba37a38b788f9/kubepods/burstable/pod3e7e3f839762b87d75b8c4a3b713e9ef/cd3afc6cd660fb975e1a81c4acb975b3fa99d1c93b3891d584052a2f9d534f7a"
	I1124 03:06:45.684810  312051 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1ba91d0b32393733349545e219cb2ef307350060db6ac170f5fba37a38b788f9/kubepods/burstable/pod3e7e3f839762b87d75b8c4a3b713e9ef/cd3afc6cd660fb975e1a81c4acb975b3fa99d1c93b3891d584052a2f9d534f7a/freezer.state
	I1124 03:06:45.692793  312051 api_server.go:204] freezer state: "THAWED"
	I1124 03:06:45.692822  312051 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 03:06:45.701423  312051 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 03:06:45.701454  312051 status.go:463] ha-411787 apiserver status = Running (err=<nil>)
	I1124 03:06:45.701466  312051 status.go:176] ha-411787 status: &{Name:ha-411787 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:06:45.701483  312051 status.go:174] checking status of ha-411787-m02 ...
	I1124 03:06:45.701820  312051 cli_runner.go:164] Run: docker container inspect ha-411787-m02 --format={{.State.Status}}
	I1124 03:06:45.718689  312051 status.go:371] ha-411787-m02 host status = "Stopped" (err=<nil>)
	I1124 03:06:45.718713  312051 status.go:384] host is not running, skipping remaining checks
	I1124 03:06:45.718721  312051 status.go:176] ha-411787-m02 status: &{Name:ha-411787-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:06:45.718746  312051 status.go:174] checking status of ha-411787-m03 ...
	I1124 03:06:45.719063  312051 cli_runner.go:164] Run: docker container inspect ha-411787-m03 --format={{.State.Status}}
	I1124 03:06:45.736111  312051 status.go:371] ha-411787-m03 host status = "Running" (err=<nil>)
	I1124 03:06:45.736136  312051 host.go:66] Checking if "ha-411787-m03" exists ...
	I1124 03:06:45.736608  312051 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-411787-m03
	I1124 03:06:45.753940  312051 host.go:66] Checking if "ha-411787-m03" exists ...
	I1124 03:06:45.754268  312051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:06:45.754312  312051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-411787-m03
	I1124 03:06:45.771972  312051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/ha-411787-m03/id_rsa Username:docker}
	I1124 03:06:45.874080  312051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:06:45.888118  312051 kubeconfig.go:125] found "ha-411787" server: "https://192.168.49.254:8443"
	I1124 03:06:45.888150  312051 api_server.go:166] Checking apiserver status ...
	I1124 03:06:45.888192  312051 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:06:45.901127  312051 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1405/cgroup
	I1124 03:06:45.909921  312051 api_server.go:182] apiserver freezer: "6:freezer:/docker/92a18ba8168215687b13405d8ccc76b94b5f07ab52910f2192f541ee0fd900c4/kubepods/burstable/pod526a54c565828a18abe13b69c931883d/d823f0f898a4ae9fce67a0f8452f2fa7e3d7d8fd522c26231832cb077e6c2d37"
	I1124 03:06:45.909992  312051 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/92a18ba8168215687b13405d8ccc76b94b5f07ab52910f2192f541ee0fd900c4/kubepods/burstable/pod526a54c565828a18abe13b69c931883d/d823f0f898a4ae9fce67a0f8452f2fa7e3d7d8fd522c26231832cb077e6c2d37/freezer.state
	I1124 03:06:45.917619  312051 api_server.go:204] freezer state: "THAWED"
	I1124 03:06:45.917652  312051 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 03:06:45.925994  312051 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 03:06:45.926028  312051 status.go:463] ha-411787-m03 apiserver status = Running (err=<nil>)
	I1124 03:06:45.926040  312051 status.go:176] ha-411787-m03 status: &{Name:ha-411787-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:06:45.926057  312051 status.go:174] checking status of ha-411787-m04 ...
	I1124 03:06:45.926378  312051 cli_runner.go:164] Run: docker container inspect ha-411787-m04 --format={{.State.Status}}
	I1124 03:06:45.945049  312051 status.go:371] ha-411787-m04 host status = "Running" (err=<nil>)
	I1124 03:06:45.945075  312051 host.go:66] Checking if "ha-411787-m04" exists ...
	I1124 03:06:45.945372  312051 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-411787-m04
	I1124 03:06:45.963412  312051 host.go:66] Checking if "ha-411787-m04" exists ...
	I1124 03:06:45.963750  312051 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:06:45.963800  312051 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-411787-m04
	I1124 03:06:45.982313  312051 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/ha-411787-m04/id_rsa Username:docker}
	I1124 03:06:46.101499  312051 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:06:46.119946  312051 status.go:176] ha-411787-m04 status: &{Name:ha-411787-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (2.28s)
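
Note: the Stopped/Running values in the status output above come from the container state with the docker driver; a sketch of the equivalent manual check, using this run's node names (a stopped node's container typically reports "exited"):

    docker container inspect ha-411787-m02 --format '{{.State.Status}}'
    out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5   # exits with status 7 while any node is down, as seen above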

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (12.52s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 node start m02 --alsologtostderr -v 5: (11.035801295s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5: (1.352273826s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (12.52s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.809605165s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (91.37s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 stop --alsologtostderr -v 5
E1124 03:07:19.705168  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 stop --alsologtostderr -v 5: (26.884181466s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 start --wait true --alsologtostderr -v 5
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 start --wait true --alsologtostderr -v 5: (1m4.286338174s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (91.37s)
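
Note: the assertion here is that the node list is unchanged across the stop/start cycle; a minimal manual equivalent with the same profile name (sketch only):

    out/minikube-linux-arm64 -p ha-411787 node list > /tmp/nodes-before.txt
    out/minikube-linux-arm64 -p ha-411787 stop
    out/minikube-linux-arm64 -p ha-411787 start --wait true
    out/minikube-linux-arm64 -p ha-411787 node list | diff /tmp/nodes-before.txt -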

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 node delete m03 --alsologtostderr -v 5
E1124 03:08:41.626761  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 node delete m03 --alsologtostderr -v 5: (9.610680119s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.60s)
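
Note: the final assertion at ha_test.go:521 only inspects node readiness; the same go-template can be run on its own to print one status per node:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'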

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (25.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 stop --alsologtostderr -v 5
E1124 03:08:47.951183  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 stop --alsologtostderr -v 5: (25.290720054s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5: exit status 7 (117.115762ms)

                                                
                                                
-- stdout --
	ha-411787
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-411787-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-411787-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:09:09.378874  326334 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:09:09.379009  326334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:09:09.379033  326334 out.go:374] Setting ErrFile to fd 2...
	I1124 03:09:09.379052  326334 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:09:09.379312  326334 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:09:09.379525  326334 out.go:368] Setting JSON to false
	I1124 03:09:09.379573  326334 mustload.go:66] Loading cluster: ha-411787
	I1124 03:09:09.379666  326334 notify.go:221] Checking for updates...
	I1124 03:09:09.380037  326334 config.go:182] Loaded profile config "ha-411787": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:09:09.380058  326334 status.go:174] checking status of ha-411787 ...
	I1124 03:09:09.380954  326334 cli_runner.go:164] Run: docker container inspect ha-411787 --format={{.State.Status}}
	I1124 03:09:09.398965  326334 status.go:371] ha-411787 host status = "Stopped" (err=<nil>)
	I1124 03:09:09.398990  326334 status.go:384] host is not running, skipping remaining checks
	I1124 03:09:09.398997  326334 status.go:176] ha-411787 status: &{Name:ha-411787 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:09:09.399024  326334 status.go:174] checking status of ha-411787-m02 ...
	I1124 03:09:09.399315  326334 cli_runner.go:164] Run: docker container inspect ha-411787-m02 --format={{.State.Status}}
	I1124 03:09:09.427103  326334 status.go:371] ha-411787-m02 host status = "Stopped" (err=<nil>)
	I1124 03:09:09.427129  326334 status.go:384] host is not running, skipping remaining checks
	I1124 03:09:09.427137  326334 status.go:176] ha-411787-m02 status: &{Name:ha-411787-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:09:09.427156  326334 status.go:174] checking status of ha-411787-m04 ...
	I1124 03:09:09.427517  326334 cli_runner.go:164] Run: docker container inspect ha-411787-m04 --format={{.State.Status}}
	I1124 03:09:09.445405  326334 status.go:371] ha-411787-m04 host status = "Stopped" (err=<nil>)
	I1124 03:09:09.445432  326334 status.go:384] host is not running, skipping remaining checks
	I1124 03:09:09.445440  326334 status.go:176] ha-411787-m04 status: &{Name:ha-411787-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (25.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (60.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.546945026s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (89.72s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 node add --control-plane --alsologtostderr -v 5
E1124 03:10:57.757114  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:11:25.468376  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 node add --control-plane --alsologtostderr -v 5: (1m28.603004722s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-411787 status --alsologtostderr -v 5: (1.116961521s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (89.72s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.090602637s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.09s)

                                                
                                    
x
+
TestJSONOutput/start/Command (78.81s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-504326 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-504326 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m18.801705179s)
--- PASS: TestJSONOutput/start/Command (78.81s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.73s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-504326 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.66s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-504326 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.09s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-504326 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-504326 --output=json --user=testUser: (6.088006195s)
--- PASS: TestJSONOutput/stop/Command (6.09s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.25s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-319904 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-319904 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.838021ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"b34df33c-5046-476e-aabf-6a148c8e7fb7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-319904] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"b58c294e-1da8-4104-be6d-8bf0f64288dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21975"}}
	{"specversion":"1.0","id":"b03595f3-3a57-434b-8f62-2c2d64659cf1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6d9fa679-f8b5-46ed-9035-21765a2d02b5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig"}}
	{"specversion":"1.0","id":"78efd8d2-0511-4679-b610-1a8afce4bdd8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube"}}
	{"specversion":"1.0","id":"c7b4cfb7-e876-4d0c-9615-18238bc2d7dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"ea8d885c-90d0-4f5e-b7fb-8b5bc6790127","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"94df8a68-3dbd-4276-ae66-f3ea4ae6aa7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-319904" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-319904
--- PASS: TestErrorJSONOutput (0.25s)
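
Note: each line of the --output=json stream above is a CloudEvents envelope; the error event that drives exit code 56 can be picked out with jq (illustrative only, jq is not part of the test):

    out/minikube-linux-arm64 start -p json-output-error-319904 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
    # DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64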

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (39.77s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-598032 --network=
E1124 03:13:47.951294  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-598032 --network=: (37.449582549s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-598032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-598032
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-598032: (2.299070663s)
--- PASS: TestKicCustomNetwork/create_custom_network (39.77s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (36.55s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-486153 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-486153 --network=bridge: (34.453765754s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-486153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-486153
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-486153: (2.074638188s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.55s)

                                                
                                    
x
+
TestKicExistingNetwork (37.62s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1124 03:14:38.363972  257069 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 03:14:38.379820  257069 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 03:14:38.379896  257069 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 03:14:38.379913  257069 cli_runner.go:164] Run: docker network inspect existing-network
W1124 03:14:38.395979  257069 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 03:14:38.396011  257069 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1124 03:14:38.396026  257069 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1124 03:14:38.396138  257069 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 03:14:38.417706  257069 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-752aaa40bb3d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:00:20:e4:71:15} reservation:<nil>}
I1124 03:14:38.418022  257069 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018ffcd0}
I1124 03:14:38.418047  257069 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 03:14:38.418100  257069 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 03:14:38.474848  257069 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-874143 --network=existing-network
E1124 03:15:11.017307  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-874143 --network=existing-network: (35.351742173s)
helpers_test.go:175: Cleaning up "existing-network-874143" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-874143
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-874143: (2.121721219s)
I1124 03:15:15.968065  257069 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (37.62s)
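
Note: the pre-existing network is created with the same labels minikube itself applies, which is also how the final label-filtered `docker network ls` finds it; a shortened sketch of the two key commands from the log above:

    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 \
      --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
    out/minikube-linux-arm64 start -p existing-network-874143 --network=existing-network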

                                                
                                    
x
+
TestKicCustomSubnet (35.83s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-924749 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-924749 --subnet=192.168.60.0/24: (33.587793916s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-924749 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-924749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-924749
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-924749: (2.216862407s)
--- PASS: TestKicCustomSubnet (35.83s)

                                                
                                    
x
+
TestKicStaticIP (34.05s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-144836 --static-ip=192.168.200.200
E1124 03:15:57.758337  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-144836 --static-ip=192.168.200.200: (31.620756544s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-144836 ip
helpers_test.go:175: Cleaning up "static-ip-144836" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-144836
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-144836: (2.268764322s)
--- PASS: TestKicStaticIP (34.05s)
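
Note: the check here is simply that the node ends up with the requested address; by hand, with the same values as this run (sketch):

    out/minikube-linux-arm64 start -p static-ip-144836 --static-ip=192.168.200.200
    out/minikube-linux-arm64 -p static-ip-144836 ip    # expected to print 192.168.200.200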

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (75.23s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-453642 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-453642 --driver=docker  --container-runtime=containerd: (33.902235986s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-456161 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-456161 --driver=docker  --container-runtime=containerd: (35.639989639s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-453642
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-456161
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-456161" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-456161
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-456161: (2.088682621s)
helpers_test.go:175: Cleaning up "first-453642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-453642
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-453642: (2.117079152s)
--- PASS: TestMinikubeProfile (75.23s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.94s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-431307 --memory=3072 --mount-string /tmp/TestMountStartserial2649677262/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-431307 --memory=3072 --mount-string /tmp/TestMountStartserial2649677262/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.939961513s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.94s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-431307 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
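
Note: the mount verification is a plain directory listing over SSH; a sketch of checking the host-to-guest mount end to end, assuming the host-side temp path chosen by this run still exists:

    touch /tmp/TestMountStartserial2649677262/001/hello-from-host
    out/minikube-linux-arm64 -p mount-start-1-431307 ssh -- ls /minikube-host   # hello-from-host should appear in the listing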

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.34s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-433375 --memory=3072 --mount-string /tmp/TestMountStartserial2649677262/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-433375 --memory=3072 --mount-string /tmp/TestMountStartserial2649677262/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.341871347s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.34s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-433375 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-431307 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-431307 --alsologtostderr -v=5: (1.718228975s)
--- PASS: TestMountStart/serial/DeleteFirst (1.72s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-433375 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.3s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-433375
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-433375: (1.300913879s)
--- PASS: TestMountStart/serial/Stop (1.30s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.77s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-433375
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-433375: (6.769940717s)
--- PASS: TestMountStart/serial/RestartStopped (7.77s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-433375 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)
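The stop/restart subtests above amount to checking that the mount is usable again after a full profile restart. A rough equivalent, assuming the mount-start-2-433375 profile created earlier in this group:

    minikube stop -p mount-start-2-433375
    minikube start -p mount-start-2-433375
    # In this run the mount point was listable again after the restart
    minikube -p mount-start-2-433375 ssh -- ls /minikube-host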

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (108.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-971601 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1124 03:18:47.950901  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-971601 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m47.697238802s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (108.22s)
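Outside the harness, the fresh two-node start is a single minikube invocation followed by a status check; a minimal sketch using the flags from this run:

    # Create a two-node cluster (one control plane, one worker) and wait for readiness
    minikube start -p multinode-971601 --wait=true --memory=3072 --nodes=2 \
      --driver=docker --container-runtime=containerd
    minikube -p multinode-971601 status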

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (5.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-971601 -- rollout status deployment/busybox: (4.00448887s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- exec busybox-7b57f96db7-dhhvg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- exec busybox-7b57f96db7-jf7qh -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- exec busybox-7b57f96db7-dhhvg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- exec busybox-7b57f96db7-jf7qh -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- exec busybox-7b57f96db7-dhhvg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- exec busybox-7b57f96db7-jf7qh -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.86s)
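The deploy step applies the test's busybox Deployment, waits for the rollout, and runs DNS lookups from each replica. A condensed sketch; the manifest path is the one from the test repository, and $POD is a hypothetical placeholder for either busybox pod name:

    minikube kubectl -p multinode-971601 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
    minikube kubectl -p multinode-971601 -- rollout status deployment/busybox
    minikube kubectl -p multinode-971601 -- get pods -o jsonpath='{.items[*].metadata.name}'
    # $POD stands in for one of the pod names printed above
    minikube kubectl -p multinode-971601 -- exec $POD -- nslookup kubernetes.default.svc.cluster.local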

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (1.03s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- exec busybox-7b57f96db7-dhhvg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- exec busybox-7b57f96db7-dhhvg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- exec busybox-7b57f96db7-jf7qh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-971601 -- exec busybox-7b57f96db7-jf7qh -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.03s)
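The host-ping check resolves host.minikube.internal from inside a pod and pings the resulting address once; roughly, with $POD again a hypothetical placeholder for a running busybox pod:

    HOST_IP=$(minikube kubectl -p multinode-971601 -- exec $POD -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    minikube kubectl -p multinode-971601 -- exec $POD -- sh -c "ping -c 1 $HOST_IP"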

                                                
                                    
x
+
TestMultiNode/serial/AddNode (27.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-971601 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-971601 -v=5 --alsologtostderr: (26.575834996s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.30s)
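Adding the third node is a one-liner plus a status check; the equivalent of the commands above:

    minikube node add -p multinode-971601
    minikube -p multinode-971601 status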

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-971601 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.75s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.66s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp testdata/cp-test.txt multinode-971601:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp multinode-971601:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile221296734/001/cp-test_multinode-971601.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp multinode-971601:/home/docker/cp-test.txt multinode-971601-m02:/home/docker/cp-test_multinode-971601_multinode-971601-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m02 "sudo cat /home/docker/cp-test_multinode-971601_multinode-971601-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp multinode-971601:/home/docker/cp-test.txt multinode-971601-m03:/home/docker/cp-test_multinode-971601_multinode-971601-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m03 "sudo cat /home/docker/cp-test_multinode-971601_multinode-971601-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp testdata/cp-test.txt multinode-971601-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp multinode-971601-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile221296734/001/cp-test_multinode-971601-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp multinode-971601-m02:/home/docker/cp-test.txt multinode-971601:/home/docker/cp-test_multinode-971601-m02_multinode-971601.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601 "sudo cat /home/docker/cp-test_multinode-971601-m02_multinode-971601.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp multinode-971601-m02:/home/docker/cp-test.txt multinode-971601-m03:/home/docker/cp-test_multinode-971601-m02_multinode-971601-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m03 "sudo cat /home/docker/cp-test_multinode-971601-m02_multinode-971601-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp testdata/cp-test.txt multinode-971601-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp multinode-971601-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile221296734/001/cp-test_multinode-971601-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp multinode-971601-m03:/home/docker/cp-test.txt multinode-971601:/home/docker/cp-test_multinode-971601-m03_multinode-971601.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601 "sudo cat /home/docker/cp-test_multinode-971601-m03_multinode-971601.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 cp multinode-971601-m03:/home/docker/cp-test.txt multinode-971601-m02:/home/docker/cp-test_multinode-971601-m03_multinode-971601-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 ssh -n multinode-971601-m02 "sudo cat /home/docker/cp-test_multinode-971601-m03_multinode-971601-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.66s)
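The copy matrix above exercises minikube cp in three directions, host to node, node to host, and node to node, each followed by a cat over ssh. A trimmed sketch covering one leg of each (destination file names here are illustrative):

    # host -> control-plane node
    minikube -p multinode-971601 cp testdata/cp-test.txt multinode-971601:/home/docker/cp-test.txt
    minikube -p multinode-971601 ssh -n multinode-971601 "sudo cat /home/docker/cp-test.txt"
    # node -> host
    minikube -p multinode-971601 cp multinode-971601:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    # node -> node
    minikube -p multinode-971601 cp multinode-971601:/home/docker/cp-test.txt \
      multinode-971601-m02:/home/docker/cp-test-from-cp.txt
    minikube -p multinode-971601 ssh -n multinode-971601-m02 "sudo cat /home/docker/cp-test-from-cp.txt"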

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-971601 node stop m03: (1.316446747s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-971601 status: exit status 7 (547.246205ms)

                                                
                                                
-- stdout --
	multinode-971601
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-971601-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-971601-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-971601 status --alsologtostderr: exit status 7 (768.61099ms)

                                                
                                                
-- stdout --
	multinode-971601
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-971601-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-971601-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:20:45.095136  379506 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:20:45.095360  379506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:20:45.095367  379506 out.go:374] Setting ErrFile to fd 2...
	I1124 03:20:45.095373  379506 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:20:45.106117  379506 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:20:45.106445  379506 out.go:368] Setting JSON to false
	I1124 03:20:45.106476  379506 mustload.go:66] Loading cluster: multinode-971601
	I1124 03:20:45.108324  379506 notify.go:221] Checking for updates...
	I1124 03:20:45.109234  379506 config.go:182] Loaded profile config "multinode-971601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:20:45.109261  379506 status.go:174] checking status of multinode-971601 ...
	I1124 03:20:45.110124  379506 cli_runner.go:164] Run: docker container inspect multinode-971601 --format={{.State.Status}}
	I1124 03:20:45.226450  379506 status.go:371] multinode-971601 host status = "Running" (err=<nil>)
	I1124 03:20:45.226477  379506 host.go:66] Checking if "multinode-971601" exists ...
	I1124 03:20:45.226823  379506 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971601
	I1124 03:20:45.271587  379506 host.go:66] Checking if "multinode-971601" exists ...
	I1124 03:20:45.271993  379506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:20:45.272044  379506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971601
	I1124 03:20:45.314459  379506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33273 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/multinode-971601/id_rsa Username:docker}
	I1124 03:20:45.426528  379506 ssh_runner.go:195] Run: systemctl --version
	I1124 03:20:45.433636  379506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:20:45.446978  379506 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:20:45.504421  379506 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-24 03:20:45.49448577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:20:45.505050  379506 kubeconfig.go:125] found "multinode-971601" server: "https://192.168.67.2:8443"
	I1124 03:20:45.505074  379506 api_server.go:166] Checking apiserver status ...
	I1124 03:20:45.505117  379506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 03:20:45.518558  379506 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1377/cgroup
	I1124 03:20:45.527673  379506 api_server.go:182] apiserver freezer: "6:freezer:/docker/f182c7c62c1bc66438f4f72ad812fb56a0e9a9a780bba490c2c25fb0f8249ee7/kubepods/burstable/pod41ca00ef763a0052cfedce9b48e81413/e95461ed747b92a88ef06268850e474e900744a8b8c90c448f56f5098390d026"
	I1124 03:20:45.527754  379506 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f182c7c62c1bc66438f4f72ad812fb56a0e9a9a780bba490c2c25fb0f8249ee7/kubepods/burstable/pod41ca00ef763a0052cfedce9b48e81413/e95461ed747b92a88ef06268850e474e900744a8b8c90c448f56f5098390d026/freezer.state
	I1124 03:20:45.535762  379506 api_server.go:204] freezer state: "THAWED"
	I1124 03:20:45.535792  379506 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 03:20:45.544608  379506 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 03:20:45.544636  379506 status.go:463] multinode-971601 apiserver status = Running (err=<nil>)
	I1124 03:20:45.544647  379506 status.go:176] multinode-971601 status: &{Name:multinode-971601 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:20:45.544663  379506 status.go:174] checking status of multinode-971601-m02 ...
	I1124 03:20:45.544985  379506 cli_runner.go:164] Run: docker container inspect multinode-971601-m02 --format={{.State.Status}}
	I1124 03:20:45.566216  379506 status.go:371] multinode-971601-m02 host status = "Running" (err=<nil>)
	I1124 03:20:45.566245  379506 host.go:66] Checking if "multinode-971601-m02" exists ...
	I1124 03:20:45.566564  379506 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-971601-m02
	I1124 03:20:45.584457  379506 host.go:66] Checking if "multinode-971601-m02" exists ...
	I1124 03:20:45.584842  379506 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 03:20:45.584902  379506 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-971601-m02
	I1124 03:20:45.602053  379506 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21975-255205/.minikube/machines/multinode-971601-m02/id_rsa Username:docker}
	I1124 03:20:45.701639  379506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 03:20:45.714465  379506 status.go:176] multinode-971601-m02 status: &{Name:multinode-971601-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:20:45.714542  379506 status.go:174] checking status of multinode-971601-m03 ...
	I1124 03:20:45.714862  379506 cli_runner.go:164] Run: docker container inspect multinode-971601-m03 --format={{.State.Status}}
	I1124 03:20:45.732870  379506 status.go:371] multinode-971601-m03 host status = "Stopped" (err=<nil>)
	I1124 03:20:45.732895  379506 status.go:384] host is not running, skipping remaining checks
	I1124 03:20:45.732903  379506 status.go:176] multinode-971601-m03 status: &{Name:multinode-971601-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.63s)

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.72s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-971601 node start m03 -v=5 --alsologtostderr: (6.92020555s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.72s)
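Taken together, StopNode and StartAfterStop boil down to stopping one worker, observing the degraded status (exit code 7 while any node is down, as seen in the output above), and bringing it back:

    minikube -p multinode-971601 node stop m03
    minikube -p multinode-971601 status     # exits 7 while m03 is stopped
    minikube -p multinode-971601 node start m03
    minikube -p multinode-971601 status
    kubectl get nodes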

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (83.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-971601
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-971601
E1124 03:20:57.756698  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-971601: (25.208578547s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-971601 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-971601 --wait=true -v=5 --alsologtostderr: (58.465394755s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-971601
--- PASS: TestMultiNode/serial/RestartKeepsNodes (83.79s)
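The restart check records the node list, stops and restarts the whole profile, and confirms the list is unchanged; a minimal equivalent of the commands above:

    minikube node list -p multinode-971601
    minikube stop -p multinode-971601
    minikube start -p multinode-971601 --wait=true
    minikube node list -p multinode-971601   # should match the list captured before the stop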

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.68s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 node delete m03
E1124 03:22:20.830229  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-971601 node delete m03: (4.969963502s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)
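Deleting a node and confirming the cluster view shrinks accordingly looks like this outside the test:

    minikube -p multinode-971601 node delete m03
    minikube -p multinode-971601 status
    kubectl get nodes   # m03 should no longer be listed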

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (24.07s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-971601 stop: (23.884291822s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-971601 status: exit status 7 (89.34874ms)

                                                
                                                
-- stdout --
	multinode-971601
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-971601-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-971601 status --alsologtostderr: exit status 7 (94.785986ms)

                                                
                                                
-- stdout --
	multinode-971601
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-971601-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:22:46.953437  388307 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:22:46.953756  388307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:22:46.953795  388307 out.go:374] Setting ErrFile to fd 2...
	I1124 03:22:46.953816  388307 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:22:46.954135  388307 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:22:46.954386  388307 out.go:368] Setting JSON to false
	I1124 03:22:46.954450  388307 mustload.go:66] Loading cluster: multinode-971601
	I1124 03:22:46.954550  388307 notify.go:221] Checking for updates...
	I1124 03:22:46.954928  388307 config.go:182] Loaded profile config "multinode-971601": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:22:46.954967  388307 status.go:174] checking status of multinode-971601 ...
	I1124 03:22:46.955858  388307 cli_runner.go:164] Run: docker container inspect multinode-971601 --format={{.State.Status}}
	I1124 03:22:46.974083  388307 status.go:371] multinode-971601 host status = "Stopped" (err=<nil>)
	I1124 03:22:46.974105  388307 status.go:384] host is not running, skipping remaining checks
	I1124 03:22:46.974112  388307 status.go:176] multinode-971601 status: &{Name:multinode-971601 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 03:22:46.974141  388307 status.go:174] checking status of multinode-971601-m02 ...
	I1124 03:22:46.974447  388307 cli_runner.go:164] Run: docker container inspect multinode-971601-m02 --format={{.State.Status}}
	I1124 03:22:46.996607  388307 status.go:371] multinode-971601-m02 host status = "Stopped" (err=<nil>)
	I1124 03:22:46.996628  388307 status.go:384] host is not running, skipping remaining checks
	I1124 03:22:46.996634  388307 status.go:176] multinode-971601-m02 status: &{Name:multinode-971601-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.07s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (50.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-971601 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-971601 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (50.185656692s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-971601 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (50.88s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-971601
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-971601-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-971601-m02 --driver=docker  --container-runtime=containerd: exit status 14 (100.691033ms)

                                                
                                                
-- stdout --
	* [multinode-971601-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-971601-m02' is duplicated with machine name 'multinode-971601-m02' in profile 'multinode-971601'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-971601-m03 --driver=docker  --container-runtime=containerd
E1124 03:23:47.951310  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-971601-m03 --driver=docker  --container-runtime=containerd: (33.85563714s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-971601
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-971601: exit status 80 (331.180119ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-971601 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-971601-m03 already exists in multinode-971601-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-971601-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-971601-m03: (2.123129498s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.46s)
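The name-conflict checks assert that a new profile may not reuse a machine name belonging to an existing multi-node profile, while an unrelated name is accepted; sketched with the names and exit codes from this run:

    # Collides with the existing machine multinode-971601-m02 -> MK_USAGE, exit 14
    minikube start -p multinode-971601-m02 --driver=docker --container-runtime=containerd
    # Unrelated name succeeds as a standalone profile
    minikube start -p multinode-971601-m03 --driver=docker --container-runtime=containerd
    # With that standalone profile present, adding a node to the original cluster
    # fails (exit 80) because the generated node name already exists
    minikube node add -p multinode-971601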

                                                
                                    
x
+
TestPreload (150.95s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-072868 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-072868 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (57.064787103s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-072868 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-072868 image pull gcr.io/k8s-minikube/busybox: (2.148428745s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-072868
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-072868: (5.921033919s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-072868 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1124 03:25:57.761825  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-072868 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m23.414517311s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-072868 image list
helpers_test.go:175: Cleaning up "test-preload-072868" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-072868
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-072868: (2.15638735s)
--- PASS: TestPreload (150.95s)
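The preload regression test creates a cluster without the preloaded image tarball, pulls an extra image, restarts, and checks that the image survives; roughly, with the flags used above:

    minikube start -p test-preload-072868 --memory=3072 --wait=true --preload=false \
      --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.0
    minikube -p test-preload-072868 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-072868
    minikube start -p test-preload-072868 --memory=3072 --wait=true \
      --driver=docker --container-runtime=containerd
    minikube -p test-preload-072868 image list   # busybox should still be listed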

                                                
                                    
x
+
TestScheduledStopUnix (110.53s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-259147 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-259147 --memory=3072 --driver=docker  --container-runtime=containerd: (33.946698663s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-259147 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 03:27:23.519543  404194 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:27:23.520102  404194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:27:23.520140  404194 out.go:374] Setting ErrFile to fd 2...
	I1124 03:27:23.520161  404194 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:27:23.520535  404194 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:27:23.520860  404194 out.go:368] Setting JSON to false
	I1124 03:27:23.521029  404194 mustload.go:66] Loading cluster: scheduled-stop-259147
	I1124 03:27:23.521472  404194 config.go:182] Loaded profile config "scheduled-stop-259147": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:27:23.521584  404194 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/config.json ...
	I1124 03:27:23.521806  404194 mustload.go:66] Loading cluster: scheduled-stop-259147
	I1124 03:27:23.521975  404194 config.go:182] Loaded profile config "scheduled-stop-259147": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-259147 -n scheduled-stop-259147
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-259147 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 03:27:23.985975  404284 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:27:23.986094  404284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:27:23.986103  404284 out.go:374] Setting ErrFile to fd 2...
	I1124 03:27:23.986111  404284 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:27:23.986392  404284 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:27:23.986704  404284 out.go:368] Setting JSON to false
	I1124 03:27:23.986923  404284 daemonize_unix.go:73] killing process 404210 as it is an old scheduled stop
	I1124 03:27:23.990653  404284 mustload.go:66] Loading cluster: scheduled-stop-259147
	I1124 03:27:23.991139  404284 config.go:182] Loaded profile config "scheduled-stop-259147": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:27:23.991229  404284 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/config.json ...
	I1124 03:27:23.991427  404284 mustload.go:66] Loading cluster: scheduled-stop-259147
	I1124 03:27:23.991562  404284 config.go:182] Loaded profile config "scheduled-stop-259147": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 03:27:23.999340  257069 retry.go:31] will retry after 75.936µs: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.000575  257069 retry.go:31] will retry after 171.488µs: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.001473  257069 retry.go:31] will retry after 232.846µs: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.002733  257069 retry.go:31] will retry after 448.166µs: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.003942  257069 retry.go:31] will retry after 517.627µs: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.005275  257069 retry.go:31] will retry after 806.406µs: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.006481  257069 retry.go:31] will retry after 1.437306ms: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.008986  257069 retry.go:31] will retry after 1.532169ms: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.010683  257069 retry.go:31] will retry after 2.478167ms: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.013983  257069 retry.go:31] will retry after 3.898834ms: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.018298  257069 retry.go:31] will retry after 7.959647ms: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.026584  257069 retry.go:31] will retry after 10.705931ms: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.037871  257069 retry.go:31] will retry after 16.682743ms: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.055641  257069 retry.go:31] will retry after 16.85914ms: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
I1124 03:27:24.072885  257069 retry.go:31] will retry after 32.998435ms: open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-259147 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-259147 -n scheduled-stop-259147
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-259147
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-259147 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 03:27:49.905359  404972 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:27:49.905562  404972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:27:49.905591  404972 out.go:374] Setting ErrFile to fd 2...
	I1124 03:27:49.905612  404972 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:27:49.906589  404972 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:27:49.906942  404972 out.go:368] Setting JSON to false
	I1124 03:27:49.907070  404972 mustload.go:66] Loading cluster: scheduled-stop-259147
	I1124 03:27:49.907476  404972 config.go:182] Loaded profile config "scheduled-stop-259147": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:27:49.907607  404972 profile.go:143] Saving config to /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/scheduled-stop-259147/config.json ...
	I1124 03:27:49.907833  404972 mustload.go:66] Loading cluster: scheduled-stop-259147
	I1124 03:27:49.908002  404972 config.go:182] Loaded profile config "scheduled-stop-259147": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-259147
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-259147: exit status 7 (78.249297ms)

                                                
                                                
-- stdout --
	scheduled-stop-259147
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-259147 -n scheduled-stop-259147
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-259147 -n scheduled-stop-259147: exit status 7 (79.786771ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-259147" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-259147
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-259147: (4.962396363s)
--- PASS: TestScheduledStopUnix (110.53s)
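The scheduled-stop flow queues a delayed stop, replaces it with a shorter one, cancels, and finally lets one fire; a condensed sketch of the commands exercised above:

    minikube stop -p scheduled-stop-259147 --schedule 5m
    minikube stop -p scheduled-stop-259147 --schedule 15s    # replaces the pending 5m stop
    minikube stop -p scheduled-stop-259147 --cancel-scheduled
    minikube status -p scheduled-stop-259147                 # still running after the cancel
    minikube stop -p scheduled-stop-259147 --schedule 15s
    # once the schedule elapses, status reports Stopped and exits 7
    minikube status -p scheduled-stop-259147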

                                                
                                    
x
+
TestInsufficientStorage (10.31s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-007701 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-007701 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (7.758997889s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"3ecef8f0-bc53-43bb-9d77-01b5864bdcce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-007701] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"16598150-d78c-45a2-8138-1ec50f63a17f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21975"}}
	{"specversion":"1.0","id":"f35332be-222a-4ed2-a2f9-4bf696f9b19c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ed53fca1-065f-4f48-a678-ec74701fb591","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig"}}
	{"specversion":"1.0","id":"5fba974e-73d8-44ae-9f2e-ad8bd1d11108","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube"}}
	{"specversion":"1.0","id":"e416a5da-5402-4659-8318-71e74f8db136","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9abaf8df-c893-4cc3-a2e6-7a3133f643a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4f824650-126f-477b-973b-0c1df58db1d2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"3f063771-e384-4f25-bbf4-5153d23d560c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"50047770-901e-45ea-984e-9dc903ff1569","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8cb5361b-b810-4576-b67d-927fcba6304c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"b77bfa9f-b371-4499-b6a9-82b739827cb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-007701\" primary control-plane node in \"insufficient-storage-007701\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"50e9fe50-b75d-4be7-8f1c-7fc6502f90c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763935653-21975 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4cdcdf77-e6cd-4de3-8161-2bc34cecf1b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"76f4fd8c-ca2e-4e2b-9d78-b706aa7a8584","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-007701 --output=json --layout=cluster
E1124 03:28:47.950959  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-007701 --output=json --layout=cluster: exit status 7 (301.257747ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-007701","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-007701","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 03:28:48.088587  406799 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-007701" does not appear in /home/jenkins/minikube-integration/21975-255205/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-007701 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-007701 --output=json --layout=cluster: exit status 7 (294.614592ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-007701","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-007701","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 03:28:48.385139  406864 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-007701" does not appear in /home/jenkins/minikube-integration/21975-255205/kubeconfig
	E1124 03:28:48.395055  406864 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/insufficient-storage-007701/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-007701" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-007701
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-007701: (1.952206775s)
--- PASS: TestInsufficientStorage (10.31s)

                                                
                                    
TestRunningBinaryUpgrade (63.33s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.862013879 start -p running-upgrade-624290 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.862013879 start -p running-upgrade-624290 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (33.032726351s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-624290 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-624290 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.800543877s)
helpers_test.go:175: Cleaning up "running-upgrade-624290" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-624290
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-624290: (2.072111506s)
--- PASS: TestRunningBinaryUpgrade (63.33s)

                                                
                                    
TestKubernetesUpgrade (349.77s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (36.368551946s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-850960
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-850960: (1.441822658s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-850960 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-850960 status --format={{.Host}}: exit status 7 (106.037799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1124 03:30:57.755741  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m53.696112222s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-850960 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (323.867834ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-850960] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-850960
	    minikube start -p kubernetes-upgrade-850960 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8509602 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-850960 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
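The suggested recovery (option 1 above) reduces to recreating the profile at the older version; a sketch using the same profile name, noting that delete discards the existing cluster state:

	minikube delete -p kubernetes-upgrade-850960
	minikube start -p kubernetes-upgrade-850960 --kubernetes-version=v1.28.0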
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1124 03:35:57.755796  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-850960 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.209819859s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-850960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-850960
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-850960: (2.479033913s)
--- PASS: TestKubernetesUpgrade (349.77s)

                                                
                                    
TestMissingContainerUpgrade (158.71s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2306997256 start -p missing-upgrade-079609 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2306997256 start -p missing-upgrade-079609 --memory=3072 --driver=docker  --container-runtime=containerd: (1m15.282388376s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-079609
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-079609: (1.455841563s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-079609
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-079609 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-079609 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m12.164028955s)
helpers_test.go:175: Cleaning up "missing-upgrade-079609" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-079609
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-079609: (2.030893014s)
--- PASS: TestMissingContainerUpgrade (158.71s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273941 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-273941 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (92.011543ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-273941] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
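Per the MK_USAGE message above, --kubernetes-version and --no-kubernetes are mutually exclusive; a sketch of the corrective steps, assuming the version came from a global config entry (the StartWithStopK8s run below shows the working form of the start command):

	# clear any globally configured version, as the error suggests
	minikube config unset kubernetes-version
	# then start without Kubernetes components
	out/minikube-linux-arm64 start -p NoKubernetes-273941 --no-kubernetes --memory=3072 --driver=docker --container-runtime=containerd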
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273941 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273941 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.742236637s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-273941 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.13s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.63s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273941 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273941 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (15.094128594s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-273941 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-273941 status -o json: exit status 2 (400.149871ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-273941","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-273941
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-273941: (2.133669696s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.63s)

                                                
                                    
TestNoKubernetes/serial/Start (8.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273941 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273941 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (8.122593087s)
--- PASS: TestNoKubernetes/serial/Start (8.12s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21975-255205/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-273941 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-273941 "sudo systemctl is-active --quiet service kubelet": exit status 1 (367.149965ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.21s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-273941
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-273941: (1.408470577s)
--- PASS: TestNoKubernetes/serial/Stop (1.41s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-273941 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-273941 --driver=docker  --container-runtime=containerd: (8.422488787s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.42s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-273941 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-273941 "sudo systemctl is-active --quiet service kubelet": exit status 1 (324.414583ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (64.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.407117872 start -p stopped-upgrade-725854 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1124 03:31:51.019792  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.407117872 start -p stopped-upgrade-725854 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.37018023s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.407117872 -p stopped-upgrade-725854 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.407117872 -p stopped-upgrade-725854 stop: (1.265331508s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-725854 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-725854 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (30.53420979s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (64.17s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.77s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-725854
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-725854: (1.77205154s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.77s)

                                                
                                    
TestPause/serial/Start (51.33s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-452767 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1124 03:33:47.951182  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-452767 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (51.334438318s)
--- PASS: TestPause/serial/Start (51.33s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (6.18s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-452767 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-452767 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.161471184s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.18s)

                                                
                                    
TestPause/serial/Pause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-452767 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)

                                                
                                    
TestPause/serial/VerifyStatus (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-452767 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-452767 --output=json --layout=cluster: exit status 2 (447.941661ms)

                                                
                                                
-- stdout --
	{"Name":"pause-452767","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-452767","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
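The 418/Paused status code above can also be read through the plain status template used earlier in this report; a minimal sketch, assuming the profile is still paused (the command is expected to report the apiserver as Paused and, as above, exit non-zero):

	out/minikube-linux-arm64 status -p pause-452767 --format='{{.APIServer}}'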
--- PASS: TestPause/serial/VerifyStatus (0.45s)

                                                
                                    
TestPause/serial/Unpause (0.68s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-452767 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.68s)

                                                
                                    
TestPause/serial/PauseAgain (0.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-452767 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

                                                
                                    
TestPause/serial/DeletePaused (3.06s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-452767 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-452767 --alsologtostderr -v=5: (3.058304538s)
--- PASS: TestPause/serial/DeletePaused (3.06s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-452767
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-452767: exit status 1 (26.516864ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-452767: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.41s)

                                                
                                    
TestNetworkPlugins/group/false (4.9s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-842431 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-842431 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (294.828969ms)

                                                
                                                
-- stdout --
	* [false-842431] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21975
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 03:35:25.955865  445627 out.go:360] Setting OutFile to fd 1 ...
	I1124 03:35:25.956161  445627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:35:25.956195  445627 out.go:374] Setting ErrFile to fd 2...
	I1124 03:35:25.956216  445627 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 03:35:25.956637  445627 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21975-255205/.minikube/bin
	I1124 03:35:25.957190  445627 out.go:368] Setting JSON to false
	I1124 03:35:25.958310  445627 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":8254,"bootTime":1763947072,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1124 03:35:25.958444  445627 start.go:143] virtualization:  
	I1124 03:35:25.964340  445627 out.go:179] * [false-842431] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1124 03:35:25.967585  445627 out.go:179]   - MINIKUBE_LOCATION=21975
	I1124 03:35:25.967683  445627 notify.go:221] Checking for updates...
	I1124 03:35:25.971599  445627 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 03:35:25.974843  445627 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21975-255205/kubeconfig
	I1124 03:35:25.977838  445627 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21975-255205/.minikube
	I1124 03:35:25.980745  445627 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1124 03:35:25.983677  445627 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 03:35:25.987234  445627 config.go:182] Loaded profile config "kubernetes-upgrade-850960": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1124 03:35:25.987412  445627 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 03:35:26.050287  445627 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1124 03:35:26.050486  445627 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 03:35:26.151857  445627 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-24 03:35:26.141404329 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1124 03:35:26.151968  445627 docker.go:319] overlay module found
	I1124 03:35:26.155334  445627 out.go:179] * Using the docker driver based on user configuration
	I1124 03:35:26.158229  445627 start.go:309] selected driver: docker
	I1124 03:35:26.158252  445627 start.go:927] validating driver "docker" against <nil>
	I1124 03:35:26.158265  445627 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 03:35:26.161744  445627 out.go:203] 
	W1124 03:35:26.165082  445627 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1124 03:35:26.168080  445627 out.go:203] 

                                                
                                                
** /stderr **
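The MK_USAGE failure above is expected for this group: --cni=false is rejected because the containerd runtime requires a CNI. A hypothetical working invocation would name a CNI instead, for example (bridge chosen only as an illustration; any CNI value supported by minikube would do):

	out/minikube-linux-arm64 start -p false-842431 --memory=3072 --cni=bridge --driver=docker --container-runtime=containerd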
net_test.go:88: 
----------------------- debugLogs start: false-842431 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-842431" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:31:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-850960
contexts:
- context:
    cluster: kubernetes-upgrade-850960
    user: kubernetes-upgrade-850960
  name: kubernetes-upgrade-850960
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-850960
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/kubernetes-upgrade-850960/client.crt
    client-key: /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/kubernetes-upgrade-850960/client.key
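Note that current-context is empty and only the kubernetes-upgrade-850960 entries exist, which is why every kubectl call against the false-842431 context above fails; a quick way to confirm which contexts are present, assuming kubectl is on the PATH:

	kubectl config get-contexts
	# lists kubernetes-upgrade-850960 only; false-842431 is absent because that cluster was never created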

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-842431

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-842431"

                                                
                                                
----------------------- debugLogs end: false-842431 [took: 4.355237202s] --------------------------------
helpers_test.go:175: Cleaning up "false-842431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-842431
--- PASS: TestNetworkPlugins/group/false (4.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (63.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m3.975135289s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (63.98s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-098965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-098965 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.067514918s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-098965 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (12.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-098965 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-098965 --alsologtostderr -v=3: (12.096405892s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.10s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-098965 -n old-k8s-version-098965
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-098965 -n old-k8s-version-098965: exit status 7 (73.589833ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-098965 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
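Exit status 7 above simply reports a stopped host, which the test treats as acceptable; a minimal sketch of the same check-then-enable sequence, using the commands from the log:

    # Expect "Stopped" and exit code 7 while the profile is down.
    minikube status --format='{{.Host}}' -p old-k8s-version-098965 || true
    # Addon configuration can still be changed while the cluster is stopped.
    minikube addons enable dashboard -p old-k8s-version-098965 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4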

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (50.85s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1124 03:38:47.950765  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:39:00.831536  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-098965 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (50.435152662s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-098965 -n old-k8s-version-098965
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.85s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-88c6m" [5b345c0a-c4eb-4efe-a181-c9f83421fafe] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003603474s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-88c6m" [5b345c0a-c4eb-4efe-a181-c9f83421fafe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00347889s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-098965 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)
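Both dashboard checks above poll for pods labelled k8s-app=kubernetes-dashboard to become healthy; an approximate manual equivalent using kubectl wait (a sketch only, the harness uses its own polling helper):

    # Wait for the dashboard pod to report Ready in the kubernetes-dashboard namespace.
    kubectl --context old-k8s-version-098965 wait pod \
      -n kubernetes-dashboard \
      -l k8s-app=kubernetes-dashboard \
      --for=condition=Ready \
      --timeout=540s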

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-098965 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (3.18s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-098965 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098965 -n old-k8s-version-098965
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098965 -n old-k8s-version-098965: exit status 2 (334.507564ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-098965 -n old-k8s-version-098965
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-098965 -n old-k8s-version-098965: exit status 2 (352.948342ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-098965 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-098965 -n old-k8s-version-098965
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-098965 -n old-k8s-version-098965
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.18s)
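The pause cycle above alternates pause/unpause with per-component status checks, and exit status 2 from status is expected while components are paused; a condensed sketch of the same sequence using the commands from the log:

    # Pause the control plane and kubelet, then confirm the reported states.
    minikube pause -p old-k8s-version-098965 --alsologtostderr -v=1
    minikube status --format='{{.APIServer}}' -p old-k8s-version-098965 || true  # "Paused", exit 2
    minikube status --format='{{.Kubelet}}' -p old-k8s-version-098965 || true    # "Stopped", exit 2
    # Resume and re-check; both status commands exit 0 once the components run again.
    minikube unpause -p old-k8s-version-098965 --alsologtostderr -v=1
    minikube status --format='{{.APIServer}}' -p old-k8s-version-098965
    minikube status --format='{{.Kubelet}}' -p old-k8s-version-098965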

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (67.84s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-262280 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-262280 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m7.842085818s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (67.84s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (58.16s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-818836 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-818836 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (58.162697308s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (58.16s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-262280 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-262280 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.012694184s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-262280 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (12.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-262280 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-262280 --alsologtostderr -v=3: (12.202912556s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-818836 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-818836 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (12.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-818836 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-818836 --alsologtostderr -v=3: (12.2410226s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.24s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-262280 -n no-preload-262280
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-262280 -n no-preload-262280: exit status 7 (75.534176ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-262280 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (54.83s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-262280 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-262280 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (54.432325549s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-262280 -n no-preload-262280
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (54.83s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.35s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-818836 -n embed-certs-818836
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-818836 -n embed-certs-818836: exit status 7 (157.415024ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-818836 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (54.75s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-818836 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-818836 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (54.360009131s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-818836 -n embed-certs-818836
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.75s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fzsfm" [dabd75f2-2c50-4dee-a736-34ee48f55ddb] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003641981s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-fzsfm" [dabd75f2-2c50-4dee-a736-34ee48f55ddb] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003149309s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-262280 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.11s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9mr5t" [715ee444-4649-4da4-8d24-e8f254edeafe] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003847512s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-262280 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (3.1s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-262280 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-262280 -n no-preload-262280
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-262280 -n no-preload-262280: exit status 2 (347.273305ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-262280 -n no-preload-262280
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-262280 -n no-preload-262280: exit status 2 (330.625747ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-262280 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-262280 -n no-preload-262280
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-262280 -n no-preload-262280
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.10s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9mr5t" [715ee444-4649-4da4-8d24-e8f254edeafe] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004597516s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-818836 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.17s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-774072 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-774072 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m28.26967379s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (88.27s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-818836 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (3.97s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-818836 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-818836 -n embed-certs-818836
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-818836 -n embed-certs-818836: exit status 2 (413.935027ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-818836 -n embed-certs-818836
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-818836 -n embed-certs-818836: exit status 2 (418.374433ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-818836 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-818836 -n embed-certs-818836
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-818836 -n embed-certs-818836
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.97s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (39.68s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-934324 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1124 03:43:03.336603  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:03.342919  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:03.354198  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:03.375697  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:03.417457  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:03.498793  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:03.660086  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:03.982161  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:04.623715  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:05.905806  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:08.467096  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:13.588884  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-934324 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (39.677666377s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.68s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-934324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-934324 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.011450931s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.01s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-934324 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-934324 --alsologtostderr -v=3: (1.371273777s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-934324 -n newest-cni-934324
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-934324 -n newest-cni-934324: exit status 7 (84.863411ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-934324 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (15.74s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-934324 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1124 03:43:23.830416  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-934324 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (15.348826287s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-934324 -n newest-cni-934324
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.74s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-934324 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (3.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-934324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-934324 -n newest-cni-934324
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-934324 -n newest-cni-934324: exit status 2 (344.618718ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-934324 -n newest-cni-934324
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-934324 -n newest-cni-934324: exit status 2 (349.620762ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-934324 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-934324 -n newest-cni-934324
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-934324 -n newest-cni-934324
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (83.96s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1124 03:43:44.312844  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:43:47.951169  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m23.959525111s)
--- PASS: TestNetworkPlugins/group/auto/Start (83.96s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-774072 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-774072 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.188519462s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-774072 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.33s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-774072 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-774072 --alsologtostderr -v=3: (12.54656123s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072: exit status 7 (86.862831ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-774072 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-774072 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1124 03:44:25.274770  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-774072 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.3139043s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.70s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-842431 "pgrep -a kubelet"
I1124 03:45:02.921656  257069 config.go:182] Loaded profile config "auto-842431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-842431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2fhvl" [18fe222b-e491-4a10-a6bf-f0857f12fd0e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2fhvl" [18fe222b-e491-4a10-a6bf-f0857f12fd0e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.003054844s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)
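The NetCatPod step force-replaces the netcat Deployment from the repository's testdata and waits for its pod to become healthy; an approximate manual equivalent (a sketch, assuming the same manifest path relative to the test tree; the harness uses its own readiness polling):

    # Re-create the netcat Deployment and wait for its pod to pass readiness.
    kubectl --context auto-842431 replace --force -f testdata/netcat-deployment.yaml
    kubectl --context auto-842431 wait pod -l app=netcat \
      --for=condition=Ready --timeout=900s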

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-842431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
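The DNS, Localhost and HairPin checks above all exec into the same netcat Deployment; a combined sketch of the three probes, using the commands from the log:

    # In-cluster DNS: resolve the kubernetes.default service name.
    kubectl --context auto-842431 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost reachability: connect to the pod's own port 8080.
    kubectl --context auto-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # Hairpin: reach the pod back through its own "netcat" Service.
    kubectl --context auto-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"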

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z2hk4" [069ffc17-9e96-4564-9770-c56db5c5ba91] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004011467s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-z2hk4" [069ffc17-9e96-4564-9770-c56db5c5ba91] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003587249s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-774072 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.16s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-774072 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.32s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (4.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-774072 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072: exit status 2 (480.865951ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072: exit status 2 (412.97576ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-774072 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-774072 -n default-k8s-diff-port-774072
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.37s)
E1124 03:50:44.171286  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:45.938648  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (87.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m27.100387483s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (87.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (63.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1124 03:45:45.939066  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:45.946270  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:45.960594  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:45.981952  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:46.023449  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:46.105560  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:46.267807  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:46.589664  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:47.196563  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:47.231220  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:48.512610  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:51.074652  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:56.196870  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:45:57.756137  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:46:06.438156  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:46:26.919829  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m3.640988259s)
--- PASS: TestNetworkPlugins/group/calico/Start (63.64s)
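Note on the repeated "Loading client cert failed" lines above: they come from the long-running test process (pid 257069), whose kubeconfig and transport cache still reference client certificates for profiles such as no-preload-262280 and old-k8s-version-098965 that earlier tests had already deleted. They are client-side warnings only and do not affect the calico result. A minimal clean-up sketch, assuming the profiles really are gone and only the stale kubeconfig entries need removing:

  # list contexts left behind on the test host
  kubectl config get-contexts
  # drop the context and user entries that point at the deleted profile's client.crt
  kubectl config delete-context no-preload-262280
  kubectl config delete-user no-preload-262280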

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-gbtpf" [a5608348-4cb5-4e0d-8637-23e170bddb5a] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-gbtpf" [a5608348-4cb5-4e0d-8637-23e170bddb5a] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004671107s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
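The ControllerPod step above simply polls for pods carrying the CNI's controller label until they report Ready (within the 10m0s budget). A rough manual equivalent, assuming the calico-842431 context is still present on the host:

  kubectl --context calico-842431 -n kube-system get pods -l k8s-app=calico-node
  kubectl --context calico-842431 -n kube-system wait --for=condition=Ready \
    pod -l k8s-app=calico-node --timeout=10m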

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-842431 "pgrep -a kubelet"
I1124 03:46:48.394423  257069 config.go:182] Loaded profile config "calico-842431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)
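KubeletFlags only captures the kubelet command line on the node over SSH; it is the raw input later assertions inspect. To look at it by hand (the grep is an illustration of what one might check, not something this test does, and the flag only appears if kubelet was started with an explicit CRI endpoint):

  out/minikube-linux-arm64 ssh -p calico-842431 "pgrep -a kubelet"
  # e.g. see which container runtime endpoint kubelet was pointed at, if set via flag
  out/minikube-linux-arm64 ssh -p calico-842431 "pgrep -a kubelet" | tr ' ' '\n' | grep container-runtime-endpoint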

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (9.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-842431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pwb9b" [de2cc96b-1a38-45f3-859a-5b5400a07cd7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-pwb9b" [de2cc96b-1a38-45f3-859a-5b5400a07cd7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004768037s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.28s)
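NetCatPod re-creates the client deployment from the repo's testdata/netcat-deployment.yaml (relative to the test working directory) and waits for its pod to go Running. A hand-run sketch of the same readiness wait, using kubectl wait in place of the test's own polling helper:

  kubectl --context calico-842431 replace --force -f testdata/netcat-deployment.yaml
  kubectl --context calico-842431 wait --for=condition=Available deployment/netcat --timeout=15m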

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-842431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)
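The DNS step execs a single lookup inside the netcat pod to prove pod-to-CoreDNS traffic works over the CNI. Reproducing it manually (a successful lookup returns the kubernetes Service's ClusterIP, typically 10.96.0.1 with minikube's default service CIDR; a timeout usually points at CoreDNS or pod-to-service connectivity):

  kubectl --context calico-842431 exec deployment/netcat -- nslookup kubernetes.default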

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.16s)
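Localhost and HairPin above differ only in the target: the first nc probes the pod's own loopback, the second reaches the pod back through its "netcat" Service, which needs hairpin NAT to work in the CNI/kube-proxy path. Both use nc -z (port scan, no data) with 5-second timeouts. Side by side:

  kubectl --context calico-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context calico-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"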

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-msgg8" [51890920-57ca-45c9-9d9e-630f3965cf2f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004812385s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-842431 "pgrep -a kubelet"
E1124 03:47:07.881816  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1124 03:47:08.336911  257069 config.go:182] Loaded profile config "kindnet-842431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.47s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-842431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nkglt" [ab1660b1-4f71-45ac-9d2a-619749d0f1d0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nkglt" [ab1660b1-4f71-45ac-9d2a-619749d0f1d0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005303441s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.37s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-842431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.34s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (62.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m2.455437308s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.46s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (75.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1124 03:48:03.336359  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m15.663770612s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (75.66s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-842431 "pgrep -a kubelet"
I1124 03:48:25.759855  257069 config.go:182] Loaded profile config "custom-flannel-842431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.43s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-842431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9bftx" [4e82b115-ca71-4114-b95e-80ae9b237c86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9bftx" [4e82b115-ca71-4114-b95e-80ae9b237c86] Running
E1124 03:48:29.804129  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/no-preload-262280/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:48:31.022137  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/addons-335123/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:48:31.038711  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/old-k8s-version-098965/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.00323863s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-842431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (65.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1124 03:48:56.911691  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/default-k8s-diff-port-774072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:48:59.473420  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/default-k8s-diff-port-774072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m5.634354171s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.63s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-842431 "pgrep -a kubelet"
I1124 03:49:00.450735  257069 config.go:182] Loaded profile config "enable-default-cni-842431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.64s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-842431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bn2f7" [9b7e615f-d835-4173-a1df-ed4f06c84f6f] Pending
helpers_test.go:352: "netcat-cd4db9dbf-bn2f7" [9b7e615f-d835-4173-a1df-ed4f06c84f6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 03:49:04.594976  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/default-k8s-diff-port-774072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-bn2f7" [9b7e615f-d835-4173-a1df-ed4f06c84f6f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004871083s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.33s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-842431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (72.87s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-842431 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m12.872104507s)
--- PASS: TestNetworkPlugins/group/bridge/Start (72.87s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-gfjxh" [1315e9cf-dc56-4a1c-812b-75a542d8b53e] Running
E1124 03:50:03.197209  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:03.203500  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:03.214824  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:03.236141  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:03.277470  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:03.359016  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:03.520559  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:03.842397  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:04.483681  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:05.765002  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003530049s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-842431 "pgrep -a kubelet"
E1124 03:50:08.326893  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1124 03:50:08.524400  257069 config.go:182] Loaded profile config "flannel-842431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-842431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-f46tt" [ad01406b-39bb-41d7-adab-8f819616fa9c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-f46tt" [ad01406b-39bb-41d7-adab-8f819616fa9c] Running
E1124 03:50:13.448491  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/auto-842431/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 03:50:16.280711  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/default-k8s-diff-port-774072/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003460901s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.42s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-842431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-842431 "pgrep -a kubelet"
I1124 03:50:49.229921  257069 config.go:182] Loaded profile config "bridge-842431": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-842431 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w6mp8" [6e573af7-c732-4d48-9dce-79b8c73fa95c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-w6mp8" [6e573af7-c732-4d48-9dce-79b8c73fa95c] Running
E1124 03:50:57.756490  257069 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/functional-930282/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003170156s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-842431 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-842431 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

                                                
                                    

Test skip (30/333)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.45s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-405197 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-405197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-405197
--- SKIP: TestDownloadOnlyKic (0.45s)

                                                
                                    
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-973998" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-973998
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
TestNetworkPlugins/group/kubenet (4.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-842431 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-842431
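Every probe in this debugLogs dump (the nslookup above and the entries that follow) fails the same way because the kubenet test is skipped before a cluster, and therefore a kubeconfig context, is ever created; the "context was not found" and "Profile ... not found" lines are expected here, not a regression. A quick sketch to confirm that on the host:

  kubectl config get-contexts kubenet-842431
  out/minikube-linux-arm64 profile list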

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-842431" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:31:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-850960
contexts:
- context:
    cluster: kubernetes-upgrade-850960
    user: kubernetes-upgrade-850960
  name: kubernetes-upgrade-850960
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-850960
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/kubernetes-upgrade-850960/client.crt
    client-key: /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/kubernetes-upgrade-850960/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-842431

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-842431"

                                                
                                                
----------------------- debugLogs end: kubenet-842431 [took: 4.235815574s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-842431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-842431
--- SKIP: TestNetworkPlugins/group/kubenet (4.47s)

                                                
                                    
TestNetworkPlugins/group/cilium (6.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-842431 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-842431" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21975-255205/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 03:31:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-850960
contexts:
- context:
    cluster: kubernetes-upgrade-850960
    user: kubernetes-upgrade-850960
  name: kubernetes-upgrade-850960
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-850960
  user:
    client-certificate: /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/kubernetes-upgrade-850960/client.crt
    client-key: /home/jenkins/minikube-integration/21975-255205/.minikube/profiles/kubernetes-upgrade-850960/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-842431

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-842431" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-842431"

                                                
                                                
----------------------- debugLogs end: cilium-842431 [took: 6.024910939s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-842431" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-842431
--- SKIP: TestNetworkPlugins/group/cilium (6.25s)

                                                
                                    