Test Report: Docker_Linux_containerd_arm64 21968

                    
c47dc458d63a230593369798adacaa3ab200078c:2025-11-23:42467

Failed tests (4/333)

Order  Failed test                                                    Duration (s)
301    TestStartStop/group/old-k8s-version/serial/DeployApp           13.57
314    TestStartStop/group/no-preload/serial/DeployApp                13.8
319    TestStartStop/group/embed-certs/serial/DeployApp               14.62
341    TestStartStop/group/default-k8s-diff-port/serial/DeployApp     15.71
TestStartStop/group/old-k8s-version/serial/DeployApp (13.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-162750 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2dd7549c-5bf6-4864-9a27-188c6854aedd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2dd7549c-5bf6-4864-9a27-188c6854aedd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003195095s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-162750 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
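
The failing assertion above is the open-file-limit check: the test execs into the busybox pod and expects 'ulimit -n' to report 1048576, but this run returned 1024. A minimal sketch for re-running the same check by hand, assuming the old-k8s-version-162750 context and the busybox pod from testdata/busybox.yaml are still up:

	# Repeat the check the test performs (command taken from the log above)
	kubectl --context old-k8s-version-162750 exec busybox -- /bin/sh -c "ulimit -n"
	# Expected: 1048576; this run returned 1024
	# Optionally compare with the limit inside the minikube node itself (hedged; uses the binary built for this run)
	out/minikube-linux-arm64 -p old-k8s-version-162750 ssh "ulimit -n"
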
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-162750
helpers_test.go:243: (dbg) docker inspect old-k8s-version-162750:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f",
	        "Created": "2025-11-23T10:54:51.953481943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1784384,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:54:52.022835067Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f/hosts",
	        "LogPath": "/var/lib/docker/containers/3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f/3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f-json.log",
	        "Name": "/old-k8s-version-162750",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-162750:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-162750",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f",
	                "LowerDir": "/var/lib/docker/overlay2/c63ab6dc690b90f0078b1181c7b2482e6dae576e4a4a9931a0cf9180a42049dc-init/diff:/var/lib/docker/overlay2/fe0bef51c968206096993e9a75db2143cd9cd74d56696a257291ce63f851a2d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c63ab6dc690b90f0078b1181c7b2482e6dae576e4a4a9931a0cf9180a42049dc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c63ab6dc690b90f0078b1181c7b2482e6dae576e4a4a9931a0cf9180a42049dc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c63ab6dc690b90f0078b1181c7b2482e6dae576e4a4a9931a0cf9180a42049dc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-162750",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-162750/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-162750",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-162750",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-162750",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1370920c28612b4b9ebc39eabef858b90e52f7c6a4afe5df6f209380389afe4b",
	            "SandboxKey": "/var/run/docker/netns/1370920c2861",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35254"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35255"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35258"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35256"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35257"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-162750": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:aa:61:1e:67:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2204e151d724b9aa254e197ae8f573fd169c40786f9413d1d5be71fa8ea2a8bd",
	                    "EndpointID": "36252d81aa94c0d5a35dc4c0eb261a48fafa705636f6aff8cab40a27543c011e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-162750",
	                        "3b748cbca934"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
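
The field in the inspect dump above that is most relevant to the failed limit check is HostConfig.Ulimits, which is empty ("Ulimits": []) for this container. A hedged one-liner to pull just that field instead of the full dump:

	docker inspect old-k8s-version-162750 --format '{{json .HostConfig.Ulimits}}'
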
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162750 -n old-k8s-version-162750
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-162750 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-162750 logs -n 25: (1.220449743s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-378762 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo containerd config dump                                                                                                                                                                                                        │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo crio config                                                                                                                                                                                                                   │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ delete  │ -p cilium-378762                                                                                                                                                                                                                                    │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:53 UTC │
	│ start   │ -p force-systemd-env-479166 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:53 UTC │
	│ delete  │ -p kubernetes-upgrade-871841                                                                                                                                                                                                                        │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:53 UTC │
	│ start   │ -p cert-expiration-679101 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-679101    │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ force-systemd-env-479166 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p force-systemd-env-479166                                                                                                                                                                                                                         │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p cert-options-501705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ cert-options-501705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ -p cert-options-501705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p cert-options-501705                                                                                                                                                                                                                              │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:55 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:54:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:54:45.778702 1783997 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:54:45.779263 1783997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:54:45.779278 1783997 out.go:374] Setting ErrFile to fd 2...
	I1123 10:54:45.779293 1783997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:54:45.779694 1783997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:54:45.780318 1783997 out.go:368] Setting JSON to false
	I1123 10:54:45.781475 1783997 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":41831,"bootTime":1763853455,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:54:45.781584 1783997 start.go:143] virtualization:  
	I1123 10:54:45.785225 1783997 out.go:179] * [old-k8s-version-162750] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:54:45.789850 1783997 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:54:45.789926 1783997 notify.go:221] Checking for updates...
	I1123 10:54:45.796654 1783997 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:54:45.799993 1783997 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:54:45.803378 1783997 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:54:45.806538 1783997 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:54:45.809836 1783997 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:54:45.813538 1783997 config.go:182] Loaded profile config "cert-expiration-679101": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:54:45.813697 1783997 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:54:45.861804 1783997 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:54:45.861945 1783997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:54:45.925616 1783997 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:54:45.915981674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:54:45.925768 1783997 docker.go:319] overlay module found
	I1123 10:54:45.930988 1783997 out.go:179] * Using the docker driver based on user configuration
	I1123 10:54:45.934033 1783997 start.go:309] selected driver: docker
	I1123 10:54:45.934059 1783997 start.go:927] validating driver "docker" against <nil>
	I1123 10:54:45.934073 1783997 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:54:45.934843 1783997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:54:45.994571 1783997 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:54:45.985733383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:54:45.994732 1783997 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:54:45.995011 1783997 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:54:45.997946 1783997 out.go:179] * Using Docker driver with root privileges
	I1123 10:54:46.001501 1783997 cni.go:84] Creating CNI manager for ""
	I1123 10:54:46.001616 1783997 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:54:46.001629 1783997 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:54:46.001728 1783997 start.go:353] cluster config:
	{Name:old-k8s-version-162750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:54:46.005335 1783997 out.go:179] * Starting "old-k8s-version-162750" primary control-plane node in "old-k8s-version-162750" cluster
	I1123 10:54:46.008262 1783997 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 10:54:46.011249 1783997 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:54:46.014166 1783997 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 10:54:46.014230 1783997 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 10:54:46.014254 1783997 cache.go:65] Caching tarball of preloaded images
	I1123 10:54:46.014261 1783997 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:54:46.014340 1783997 preload.go:238] Found /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 10:54:46.014351 1783997 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 10:54:46.014459 1783997 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/config.json ...
	I1123 10:54:46.014476 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/config.json: {Name:mk5eef821183a362255c44f8410d633523a499ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:46.034721 1783997 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:54:46.034749 1783997 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:54:46.034782 1783997 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:54:46.034830 1783997 start.go:360] acquireMachinesLock for old-k8s-version-162750: {Name:mk0f3804e6ccc6cb84c4dea8eb218364814cd6db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:54:46.034953 1783997 start.go:364] duration metric: took 100.469µs to acquireMachinesLock for "old-k8s-version-162750"
	I1123 10:54:46.034987 1783997 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-162750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162750 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:54:46.035063 1783997 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:54:46.040311 1783997 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:54:46.040562 1783997 start.go:159] libmachine.API.Create for "old-k8s-version-162750" (driver="docker")
	I1123 10:54:46.040603 1783997 client.go:173] LocalClient.Create starting
	I1123 10:54:46.040681 1783997 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem
	I1123 10:54:46.040726 1783997 main.go:143] libmachine: Decoding PEM data...
	I1123 10:54:46.040752 1783997 main.go:143] libmachine: Parsing certificate...
	I1123 10:54:46.040826 1783997 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem
	I1123 10:54:46.040852 1783997 main.go:143] libmachine: Decoding PEM data...
	I1123 10:54:46.040869 1783997 main.go:143] libmachine: Parsing certificate...
	I1123 10:54:46.041257 1783997 cli_runner.go:164] Run: docker network inspect old-k8s-version-162750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:54:46.058141 1783997 cli_runner.go:211] docker network inspect old-k8s-version-162750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:54:46.058237 1783997 network_create.go:284] running [docker network inspect old-k8s-version-162750] to gather additional debugging logs...
	I1123 10:54:46.058260 1783997 cli_runner.go:164] Run: docker network inspect old-k8s-version-162750
	W1123 10:54:46.075827 1783997 cli_runner.go:211] docker network inspect old-k8s-version-162750 returned with exit code 1
	I1123 10:54:46.075860 1783997 network_create.go:287] error running [docker network inspect old-k8s-version-162750]: docker network inspect old-k8s-version-162750: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-162750 not found
	I1123 10:54:46.075874 1783997 network_create.go:289] output of [docker network inspect old-k8s-version-162750]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-162750 not found
	
	** /stderr **
	I1123 10:54:46.075987 1783997 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:54:46.092827 1783997 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e44f782e1ead IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ae:ef:b1:2b:de} reservation:<nil>}
	I1123 10:54:46.093109 1783997 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d795300f262d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:f7:c2:f9:ad:5b} reservation:<nil>}
	I1123 10:54:46.093426 1783997 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4b6f246690b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:41:9a:79:92:5d} reservation:<nil>}
	I1123 10:54:46.093747 1783997 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1baa3e8d750 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:36:db:69:d0:2a:57} reservation:<nil>}
	I1123 10:54:46.094196 1783997 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f7cf0}
	I1123 10:54:46.094224 1783997 network_create.go:124] attempt to create docker network old-k8s-version-162750 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 10:54:46.094288 1783997 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-162750 old-k8s-version-162750
	I1123 10:54:46.153890 1783997 network_create.go:108] docker network old-k8s-version-162750 192.168.85.0/24 created
	I1123 10:54:46.153925 1783997 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-162750" container
	I1123 10:54:46.153999 1783997 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:54:46.169956 1783997 cli_runner.go:164] Run: docker volume create old-k8s-version-162750 --label name.minikube.sigs.k8s.io=old-k8s-version-162750 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:54:46.186824 1783997 oci.go:103] Successfully created a docker volume old-k8s-version-162750
	I1123 10:54:46.186944 1783997 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-162750-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-162750 --entrypoint /usr/bin/test -v old-k8s-version-162750:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:54:46.708368 1783997 oci.go:107] Successfully prepared a docker volume old-k8s-version-162750
	I1123 10:54:46.708439 1783997 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 10:54:46.708453 1783997 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:54:46.708536 1783997 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-162750:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:54:51.880252 1783997 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-162750:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.171646405s)
	I1123 10:54:51.880292 1783997 kic.go:203] duration metric: took 5.171834502s to extract preloaded images to volume ...
	W1123 10:54:51.880429 1783997 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:54:51.880567 1783997 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:54:51.937741 1783997 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-162750 --name old-k8s-version-162750 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-162750 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-162750 --network old-k8s-version-162750 --ip 192.168.85.2 --volume old-k8s-version-162750:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:54:52.249456 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Running}}
	I1123 10:54:52.270863 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:54:52.301164 1783997 cli_runner.go:164] Run: docker exec old-k8s-version-162750 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:54:52.361371 1783997 oci.go:144] the created container "old-k8s-version-162750" has a running status.
	I1123 10:54:52.361399 1783997 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa...
	I1123 10:54:53.193748 1783997 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:54:53.218957 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:54:53.240299 1783997 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:54:53.240318 1783997 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-162750 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:54:53.285222 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:54:53.318410 1783997 machine.go:94] provisionDockerMachine start ...
	I1123 10:54:53.318519 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:53.342415 1783997 main.go:143] libmachine: Using SSH client type: native
	I1123 10:54:53.342758 1783997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35254 <nil> <nil>}
	I1123 10:54:53.342768 1783997 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:54:53.506886 1783997 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-162750
	
	I1123 10:54:53.506951 1783997 ubuntu.go:182] provisioning hostname "old-k8s-version-162750"
	I1123 10:54:53.507054 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:53.526225 1783997 main.go:143] libmachine: Using SSH client type: native
	I1123 10:54:53.526575 1783997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35254 <nil> <nil>}
	I1123 10:54:53.526587 1783997 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-162750 && echo "old-k8s-version-162750" | sudo tee /etc/hostname
	I1123 10:54:53.698430 1783997 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-162750
	
	I1123 10:54:53.698512 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:53.716977 1783997 main.go:143] libmachine: Using SSH client type: native
	I1123 10:54:53.717284 1783997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35254 <nil> <nil>}
	I1123 10:54:53.717301 1783997 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-162750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-162750/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-162750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:54:53.867695 1783997 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:54:53.867736 1783997 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-1582671/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-1582671/.minikube}
	I1123 10:54:53.867756 1783997 ubuntu.go:190] setting up certificates
	I1123 10:54:53.867764 1783997 provision.go:84] configureAuth start
	I1123 10:54:53.867825 1783997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-162750
	I1123 10:54:53.885136 1783997 provision.go:143] copyHostCerts
	I1123 10:54:53.885204 1783997 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem, removing ...
	I1123 10:54:53.885214 1783997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem
	I1123 10:54:53.885304 1783997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem (1078 bytes)
	I1123 10:54:53.885408 1783997 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem, removing ...
	I1123 10:54:53.885414 1783997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem
	I1123 10:54:53.885444 1783997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem (1123 bytes)
	I1123 10:54:53.885493 1783997 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem, removing ...
	I1123 10:54:53.885497 1783997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem
	I1123 10:54:53.885520 1783997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem (1675 bytes)
	I1123 10:54:53.885565 1783997 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-162750 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-162750]
	I1123 10:54:54.011017 1783997 provision.go:177] copyRemoteCerts
	I1123 10:54:54.011141 1783997 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:54:54.011241 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:54.029286 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:54:54.139377 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:54:54.158012 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 10:54:54.175734 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:54:54.193130 1783997 provision.go:87] duration metric: took 325.351961ms to configureAuth
	I1123 10:54:54.193154 1783997 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:54:54.193341 1783997 config.go:182] Loaded profile config "old-k8s-version-162750": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 10:54:54.193348 1783997 machine.go:97] duration metric: took 874.916799ms to provisionDockerMachine
	I1123 10:54:54.193354 1783997 client.go:176] duration metric: took 8.152740764s to LocalClient.Create
	I1123 10:54:54.193376 1783997 start.go:167] duration metric: took 8.152816487s to libmachine.API.Create "old-k8s-version-162750"
	I1123 10:54:54.193383 1783997 start.go:293] postStartSetup for "old-k8s-version-162750" (driver="docker")
	I1123 10:54:54.193391 1783997 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:54:54.193437 1783997 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:54:54.193482 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:54.210021 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:54:54.325275 1783997 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:54:54.329024 1783997 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:54:54.329049 1783997 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:54:54.329061 1783997 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/addons for local assets ...
	I1123 10:54:54.329120 1783997 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/files for local assets ...
	I1123 10:54:54.329201 1783997 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem -> 15845322.pem in /etc/ssl/certs
	I1123 10:54:54.329310 1783997 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:54:54.338040 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:54:54.358007 1783997 start.go:296] duration metric: took 164.608782ms for postStartSetup
	I1123 10:54:54.358447 1783997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-162750
	I1123 10:54:54.375573 1783997 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/config.json ...
	I1123 10:54:54.375883 1783997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:54:54.375946 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:54.393024 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:54:54.496581 1783997 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:54:54.501414 1783997 start.go:128] duration metric: took 8.466335269s to createHost
	I1123 10:54:54.501446 1783997 start.go:83] releasing machines lock for "old-k8s-version-162750", held for 8.466480463s
	I1123 10:54:54.501515 1783997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-162750
	I1123 10:54:54.520198 1783997 ssh_runner.go:195] Run: cat /version.json
	I1123 10:54:54.520243 1783997 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:54:54.520249 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:54.520311 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:54.551362 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:54:54.553162 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:54:54.747658 1783997 ssh_runner.go:195] Run: systemctl --version
	I1123 10:54:54.754098 1783997 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:54:54.761562 1783997 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:54:54.761662 1783997 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:54:54.788880 1783997 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 10:54:54.788906 1783997 start.go:496] detecting cgroup driver to use...
	I1123 10:54:54.788940 1783997 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:54:54.789013 1783997 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 10:54:54.803874 1783997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 10:54:54.816732 1783997 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:54:54.816825 1783997 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:54:54.834220 1783997 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:54:54.853351 1783997 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:54:54.974263 1783997 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:54:55.105120 1783997 docker.go:234] disabling docker service ...
	I1123 10:54:55.105245 1783997 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:54:55.130111 1783997 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:54:55.145070 1783997 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:54:55.267040 1783997 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:54:55.418898 1783997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:54:55.433109 1783997 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:54:55.450908 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1123 10:54:55.460521 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 10:54:55.470466 1783997 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 10:54:55.470581 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 10:54:55.480234 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:54:55.489346 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 10:54:55.498570 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:54:55.508154 1783997 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:54:55.516432 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 10:54:55.526011 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 10:54:55.540221 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 10:54:55.550647 1783997 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:54:55.558569 1783997 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:54:55.566399 1783997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:54:55.688971 1783997 ssh_runner.go:195] Run: sudo systemctl restart containerd
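The sequence of sed edits above rewrites /etc/containerd/config.toml in place before the restart: the sandbox (pause) image is pinned to registry.k8s.io/pause:3.9, SystemdCgroup is forced to false to match the "cgroupfs" driver detected on the host, the CNI conf_dir is pointed at /etc/cni/net.d, and enable_unprivileged_ports is switched on under the CRI plugin section. A quick way to confirm the edits took effect inside the node (a sketch; containerd config dump prints the merged configuration actually in use):

	sudo grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	sudo containerd config dump | grep -E 'SystemdCgroup|sandbox_image'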
	I1123 10:54:55.823826 1783997 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 10:54:55.823941 1783997 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 10:54:55.827847 1783997 start.go:564] Will wait 60s for crictl version
	I1123 10:54:55.827950 1783997 ssh_runner.go:195] Run: which crictl
	I1123 10:54:55.831640 1783997 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:54:55.858457 1783997 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
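The version output above confirms that crictl reaches containerd v2.1.5 through the endpoint written to /etc/crictl.yaml a few steps earlier. The same check can be reproduced by hand inside the node (for example via minikube ssh -p old-k8s-version-162750); a sketch:

	sudo cat /etc/crictl.yaml
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version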
	I1123 10:54:55.858567 1783997 ssh_runner.go:195] Run: containerd --version
	I1123 10:54:55.881109 1783997 ssh_runner.go:195] Run: containerd --version
	I1123 10:54:55.905016 1783997 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1123 10:54:55.907979 1783997 cli_runner.go:164] Run: docker network inspect old-k8s-version-162750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:54:55.924445 1783997 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:54:55.928343 1783997 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
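The host.minikube.internal entry is injected with a filter-and-copy pattern rather than a direct append: any stale line is stripped with grep -v, the fresh "192.168.85.1	host.minikube.internal" mapping is appended to a temporary file, and the result is copied back with sudo cp, since the redirection in a plain sudo echo ... >> /etc/hosts would be performed by the unprivileged shell. Inside the node the result can be checked with, for instance:

	grep 'host.minikube.internal' /etc/hosts
	getent hosts host.minikube.internal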
	I1123 10:54:55.938405 1783997 kubeadm.go:884] updating cluster {Name:old-k8s-version-162750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162750 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:54:55.938526 1783997 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 10:54:55.938599 1783997 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:54:55.964746 1783997 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 10:54:55.964769 1783997 containerd.go:534] Images already preloaded, skipping extraction
	I1123 10:54:55.964834 1783997 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:54:55.992799 1783997 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 10:54:55.992822 1783997 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:54:55.992831 1783997 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1123 10:54:55.992924 1783997 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-162750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
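The [Service] override above (an empty ExecStart= followed by the full command line) is the standard systemd way to replace, rather than extend, the packaged kubelet ExecStart, so the flags shown are what the node's kubelet should actually run with. Once the unit and drop-in are copied onto the node a few lines below, they can be confirmed with, for example:

	systemctl cat kubelet
	systemctl show kubelet --property=ExecStart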
	I1123 10:54:55.992985 1783997 ssh_runner.go:195] Run: sudo crictl info
	I1123 10:54:56.022425 1783997 cni.go:84] Creating CNI manager for ""
	I1123 10:54:56.022448 1783997 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:54:56.022463 1783997 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:54:56.022486 1783997 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-162750 NodeName:old-k8s-version-162750 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:54:56.022643 1783997 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-162750"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:54:56.022714 1783997 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 10:54:56.031515 1783997 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:54:56.031590 1783997 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:54:56.040467 1783997 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1123 10:54:56.054737 1783997 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:54:56.069711 1783997 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
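The rendered kubeadm configuration printed above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new (2176 bytes) and is copied to /var/tmp/minikube/kubeadm.yaml just before kubeadm init runs. If a manual sanity check of such a file is wanted, recent kubeadm releases (roughly v1.26 and later) ship a validate subcommand; a sketch using the binaries staged for this profile:

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new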
	I1123 10:54:56.085335 1783997 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:54:56.089252 1783997 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:54:56.099679 1783997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:54:56.231828 1783997 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:54:56.250164 1783997 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750 for IP: 192.168.85.2
	I1123 10:54:56.250236 1783997 certs.go:195] generating shared ca certs ...
	I1123 10:54:56.250266 1783997 certs.go:227] acquiring lock for ca certs: {Name:mk3cca888d785818ac92c3c8d4e66a37bae0b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:56.250450 1783997 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key
	I1123 10:54:56.250524 1783997 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key
	I1123 10:54:56.250557 1783997 certs.go:257] generating profile certs ...
	I1123 10:54:56.250632 1783997 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.key
	I1123 10:54:56.250670 1783997 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt with IP's: []
	I1123 10:54:56.568517 1783997 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt ...
	I1123 10:54:56.568551 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: {Name:mk160e046d920c647b09293b52a55655d4f79645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:56.568755 1783997 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.key ...
	I1123 10:54:56.568771 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.key: {Name:mkcf59a92983eb562e64ef836dbe11b8eebc9090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:56.568869 1783997 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key.9a4a4bf9
	I1123 10:54:56.568889 1783997 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt.9a4a4bf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 10:54:56.858965 1783997 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt.9a4a4bf9 ...
	I1123 10:54:56.858994 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt.9a4a4bf9: {Name:mk074c30cec604803cd4dceea20cabf9824439f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:56.859197 1783997 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key.9a4a4bf9 ...
	I1123 10:54:56.859210 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key.9a4a4bf9: {Name:mkf618c3e44d5bab75985d611632bd8af39340de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:56.859310 1783997 certs.go:382] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt.9a4a4bf9 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt
	I1123 10:54:56.859391 1783997 certs.go:386] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key.9a4a4bf9 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key
	I1123 10:54:56.859449 1783997 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.key
	I1123 10:54:56.859466 1783997 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.crt with IP's: []
	I1123 10:54:57.068178 1783997 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.crt ...
	I1123 10:54:57.068207 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.crt: {Name:mk2e7a4c936e1d1dac560fcaa9cb1621ab7cb5b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:57.068389 1783997 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.key ...
	I1123 10:54:57.068402 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.key: {Name:mkb3bf3285704f3eef03f7d9bab92686c229ead8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:57.068595 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem (1338 bytes)
	W1123 10:54:57.068641 1783997 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532_empty.pem, impossibly tiny 0 bytes
	I1123 10:54:57.068651 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:54:57.068677 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:54:57.068704 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:54:57.068730 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem (1675 bytes)
	I1123 10:54:57.068799 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:54:57.069357 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:54:57.089272 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:54:57.107948 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:54:57.128561 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:54:57.153366 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 10:54:57.171471 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:54:57.189606 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:54:57.211915 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:54:57.232653 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:54:57.250846 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem --> /usr/share/ca-certificates/1584532.pem (1338 bytes)
	I1123 10:54:57.269192 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /usr/share/ca-certificates/15845322.pem (1708 bytes)
	I1123 10:54:57.287405 1783997 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:54:57.301602 1783997 ssh_runner.go:195] Run: openssl version
	I1123 10:54:57.307994 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584532.pem && ln -fs /usr/share/ca-certificates/1584532.pem /etc/ssl/certs/1584532.pem"
	I1123 10:54:57.317490 1783997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584532.pem
	I1123 10:54:57.323882 1783997 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:17 /usr/share/ca-certificates/1584532.pem
	I1123 10:54:57.323949 1783997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584532.pem
	I1123 10:54:57.365456 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584532.pem /etc/ssl/certs/51391683.0"
	I1123 10:54:57.373849 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15845322.pem && ln -fs /usr/share/ca-certificates/15845322.pem /etc/ssl/certs/15845322.pem"
	I1123 10:54:57.382387 1783997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15845322.pem
	I1123 10:54:57.386098 1783997 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:17 /usr/share/ca-certificates/15845322.pem
	I1123 10:54:57.386204 1783997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15845322.pem
	I1123 10:54:57.427012 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15845322.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:54:57.435584 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:54:57.444131 1783997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:54:57.447981 1783997 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:10 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:54:57.448074 1783997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:54:57.489133 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
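The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are the OpenSSL subject hashes of the corresponding PEM files, which is how OpenSSL's CApath-style lookup locates a CA at verification time. The hash in each link name is exactly what the openssl x509 -hash calls above print; for example:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, matching the /etc/ssl/certs/b5213941.0 symlink created above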
	I1123 10:54:57.497721 1783997 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:54:57.501803 1783997 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:54:57.501856 1783997 kubeadm.go:401] StartCluster: {Name:old-k8s-version-162750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162750 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:54:57.501934 1783997 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 10:54:57.502002 1783997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:54:57.530473 1783997 cri.go:89] found id: ""
	I1123 10:54:57.530624 1783997 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:54:57.538954 1783997 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:54:57.546859 1783997 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:54:57.546928 1783997 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:54:57.555016 1783997 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:54:57.555084 1783997 kubeadm.go:158] found existing configuration files:
	
	I1123 10:54:57.555156 1783997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:54:57.563434 1783997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:54:57.563545 1783997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:54:57.571332 1783997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:54:57.579168 1783997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:54:57.579259 1783997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:54:57.586662 1783997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:54:57.594768 1783997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:54:57.594868 1783997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:54:57.602273 1783997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:54:57.610319 1783997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:54:57.610390 1783997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:54:57.617969 1783997 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:54:57.662566 1783997 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1123 10:54:57.662726 1783997 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:54:57.700306 1783997 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:54:57.700427 1783997 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:54:57.700488 1783997 kubeadm.go:319] OS: Linux
	I1123 10:54:57.700554 1783997 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:54:57.700629 1783997 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:54:57.700698 1783997 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:54:57.700792 1783997 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:54:57.700861 1783997 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:54:57.700946 1783997 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:54:57.701011 1783997 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:54:57.701089 1783997 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:54:57.701156 1783997 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:54:57.785554 1783997 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:54:57.785678 1783997 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:54:57.785792 1783997 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1123 10:54:57.967860 1783997 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:54:57.974440 1783997 out.go:252]   - Generating certificates and keys ...
	I1123 10:54:57.974603 1783997 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:54:57.974714 1783997 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:54:58.465232 1783997 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:54:58.683296 1783997 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:54:59.280296 1783997 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:54:59.791233 1783997 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:55:00.165766 1783997 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:55:00.165918 1783997 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-162750] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:55:01.066202 1783997 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:55:01.066469 1783997 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-162750] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:55:01.561871 1783997 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:55:02.017621 1783997 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:55:02.474583 1783997 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:55:02.475152 1783997 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:55:02.836024 1783997 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:55:03.718945 1783997 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:55:04.189939 1783997 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:55:04.931734 1783997 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:55:04.932956 1783997 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:55:04.936030 1783997 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:55:04.939746 1783997 out.go:252]   - Booting up control plane ...
	I1123 10:55:04.939857 1783997 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:55:04.939962 1783997 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:55:04.941019 1783997 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:55:04.961212 1783997 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:55:04.961315 1783997 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:55:04.961359 1783997 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:55:05.101616 1783997 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1123 10:55:14.605046 1783997 kubeadm.go:319] [apiclient] All control plane components are healthy after 9.503837 seconds
	I1123 10:55:14.605170 1783997 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:55:14.625306 1783997 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:55:15.168835 1783997 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:55:15.169046 1783997 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-162750 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:55:15.681392 1783997 kubeadm.go:319] [bootstrap-token] Using token: b6ms7a.52dw6vj4aucktnza
	I1123 10:55:15.684473 1783997 out.go:252]   - Configuring RBAC rules ...
	I1123 10:55:15.684620 1783997 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:55:15.691075 1783997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:55:15.700139 1783997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:55:15.714561 1783997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:55:15.722689 1783997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:55:15.729225 1783997 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:55:15.752397 1783997 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:55:16.155418 1783997 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:55:16.212380 1783997 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:55:16.214126 1783997 kubeadm.go:319] 
	I1123 10:55:16.214205 1783997 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:55:16.214216 1783997 kubeadm.go:319] 
	I1123 10:55:16.214293 1783997 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:55:16.214302 1783997 kubeadm.go:319] 
	I1123 10:55:16.214342 1783997 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:55:16.214416 1783997 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:55:16.214474 1783997 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:55:16.214483 1783997 kubeadm.go:319] 
	I1123 10:55:16.214537 1783997 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:55:16.214543 1783997 kubeadm.go:319] 
	I1123 10:55:16.214592 1783997 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:55:16.214600 1783997 kubeadm.go:319] 
	I1123 10:55:16.214652 1783997 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:55:16.214731 1783997 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:55:16.214810 1783997 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:55:16.214817 1783997 kubeadm.go:319] 
	I1123 10:55:16.214901 1783997 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:55:16.214982 1783997 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:55:16.214988 1783997 kubeadm.go:319] 
	I1123 10:55:16.215072 1783997 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token b6ms7a.52dw6vj4aucktnza \
	I1123 10:55:16.215244 1783997 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 \
	I1123 10:55:16.215274 1783997 kubeadm.go:319] 	--control-plane 
	I1123 10:55:16.215280 1783997 kubeadm.go:319] 
	I1123 10:55:16.215371 1783997 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:55:16.215379 1783997 kubeadm.go:319] 
	I1123 10:55:16.215461 1783997 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token b6ms7a.52dw6vj4aucktnza \
	I1123 10:55:16.215567 1783997 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 
	I1123 10:55:16.220081 1783997 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:55:16.220204 1783997 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
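The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 digest of the cluster CA's public key. It can be recomputed on the control plane with the standard recipe from the kubeadm documentation (the path matches the certificatesDir set in the kubeadm config earlier in this log):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'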
	I1123 10:55:16.220225 1783997 cni.go:84] Creating CNI manager for ""
	I1123 10:55:16.220237 1783997 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:55:16.223476 1783997 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:55:16.226383 1783997 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:55:16.231956 1783997 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1123 10:55:16.231984 1783997 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:55:16.261607 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:55:17.575403 1783997 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.313763507s)
	I1123 10:55:17.575444 1783997 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:55:17.575566 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:17.575660 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-162750 minikube.k8s.io/updated_at=2025_11_23T10_55_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=old-k8s-version-162750 minikube.k8s.io/primary=true
	I1123 10:55:17.773918 1783997 ops.go:34] apiserver oom_adj: -16
	I1123 10:55:17.774078 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:18.274586 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:18.774782 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:19.274628 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:19.774305 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:20.274200 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:20.774754 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:21.274587 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:21.774695 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:22.274692 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:22.774493 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:23.274487 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:23.774994 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:24.274216 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:24.774862 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:25.274883 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:25.775167 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:26.274163 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:26.774629 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:27.274395 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:27.774170 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:28.274787 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:28.453896 1783997 kubeadm.go:1114] duration metric: took 10.878380592s to wait for elevateKubeSystemPrivileges
	I1123 10:55:28.453924 1783997 kubeadm.go:403] duration metric: took 30.952072121s to StartCluster
	I1123 10:55:28.453940 1783997 settings.go:142] acquiring lock: {Name:mk2ffa164862318fd53ac563f81d54c15c17157b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:55:28.453999 1783997 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:55:28.454990 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:55:28.455237 1783997 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:55:28.455398 1783997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:55:28.455651 1783997 config.go:182] Loaded profile config "old-k8s-version-162750": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 10:55:28.455688 1783997 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:55:28.455746 1783997 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-162750"
	I1123 10:55:28.455759 1783997 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-162750"
	I1123 10:55:28.455778 1783997 host.go:66] Checking if "old-k8s-version-162750" exists ...
	I1123 10:55:28.456266 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:55:28.456747 1783997 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-162750"
	I1123 10:55:28.456764 1783997 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-162750"
	I1123 10:55:28.457044 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:55:28.461189 1783997 out.go:179] * Verifying Kubernetes components...
	I1123 10:55:28.464749 1783997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:55:28.496981 1783997 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:55:28.499578 1783997 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-162750"
	I1123 10:55:28.499615 1783997 host.go:66] Checking if "old-k8s-version-162750" exists ...
	I1123 10:55:28.501843 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:55:28.502111 1783997 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:55:28.502126 1783997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:55:28.502165 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:55:28.543332 1783997 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:55:28.543355 1783997 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:55:28.543416 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:55:28.545228 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:55:28.576806 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:55:28.784725 1783997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:55:28.809554 1783997 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:55:28.815162 1783997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:55:28.824290 1783997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:55:29.662463 1783997 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:55:29.664108 1783997 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-162750" to be "Ready" ...
	I1123 10:55:30.126374 1783997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.302009167s)
	I1123 10:55:30.129697 1783997 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 10:55:30.132617 1783997 addons.go:530] duration metric: took 1.676919337s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 10:55:30.167772 1783997 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-162750" context rescaled to 1 replicas
	W1123 10:55:31.668910 1783997 node_ready.go:57] node "old-k8s-version-162750" has "Ready":"False" status (will retry)
	W1123 10:55:34.167440 1783997 node_ready.go:57] node "old-k8s-version-162750" has "Ready":"False" status (will retry)
	W1123 10:55:36.167739 1783997 node_ready.go:57] node "old-k8s-version-162750" has "Ready":"False" status (will retry)
	W1123 10:55:38.668076 1783997 node_ready.go:57] node "old-k8s-version-162750" has "Ready":"False" status (will retry)
	W1123 10:55:41.168316 1783997 node_ready.go:57] node "old-k8s-version-162750" has "Ready":"False" status (will retry)
	I1123 10:55:42.169908 1783997 node_ready.go:49] node "old-k8s-version-162750" is "Ready"
	I1123 10:55:42.169951 1783997 node_ready.go:38] duration metric: took 12.505541365s for node "old-k8s-version-162750" to be "Ready" ...
	I1123 10:55:42.169971 1783997 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:55:42.170085 1783997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:55:42.202531 1783997 api_server.go:72] duration metric: took 13.747259624s to wait for apiserver process to appear ...
	I1123 10:55:42.202563 1783997 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:55:42.202588 1783997 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:55:42.218720 1783997 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:55:42.220782 1783997 api_server.go:141] control plane version: v1.28.0
	I1123 10:55:42.220908 1783997 api_server.go:131] duration metric: took 18.335434ms to wait for apiserver health ...
	I1123 10:55:42.220947 1783997 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:55:42.227364 1783997 system_pods.go:59] 8 kube-system pods found
	I1123 10:55:42.227483 1783997 system_pods.go:61] "coredns-5dd5756b68-cxm6d" [5bb94c83-477d-49aa-9ade-b2404e214905] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:55:42.227507 1783997 system_pods.go:61] "etcd-old-k8s-version-162750" [aced8707-2bd1-4ee1-98fe-311294917440] Running
	I1123 10:55:42.227551 1783997 system_pods.go:61] "kindnet-zb6c5" [2801a8b0-1e0a-4426-be8c-07fd89dff52f] Running
	I1123 10:55:42.227577 1783997 system_pods.go:61] "kube-apiserver-old-k8s-version-162750" [1350c454-aabc-4ecc-b8c0-230c87e88fb5] Running
	I1123 10:55:42.227599 1783997 system_pods.go:61] "kube-controller-manager-old-k8s-version-162750" [67fd41b0-4c9f-4e0a-93f8-d3d298a13ce6] Running
	I1123 10:55:42.227636 1783997 system_pods.go:61] "kube-proxy-79b2j" [e6211a98-e130-4ef0-b3b4-25ab09219fd4] Running
	I1123 10:55:42.227661 1783997 system_pods.go:61] "kube-scheduler-old-k8s-version-162750" [0559d534-1bf5-49e8-871e-48080b9375ee] Running
	I1123 10:55:42.227685 1783997 system_pods.go:61] "storage-provisioner" [3c9ddafc-e744-4085-ab89-dace2cd10a03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:55:42.227724 1783997 system_pods.go:74] duration metric: took 6.734708ms to wait for pod list to return data ...
	I1123 10:55:42.227754 1783997 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:55:42.233192 1783997 default_sa.go:45] found service account: "default"
	I1123 10:55:42.233282 1783997 default_sa.go:55] duration metric: took 5.507083ms for default service account to be created ...
	I1123 10:55:42.233310 1783997 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:55:42.241449 1783997 system_pods.go:86] 8 kube-system pods found
	I1123 10:55:42.241556 1783997 system_pods.go:89] "coredns-5dd5756b68-cxm6d" [5bb94c83-477d-49aa-9ade-b2404e214905] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:55:42.241581 1783997 system_pods.go:89] "etcd-old-k8s-version-162750" [aced8707-2bd1-4ee1-98fe-311294917440] Running
	I1123 10:55:42.241619 1783997 system_pods.go:89] "kindnet-zb6c5" [2801a8b0-1e0a-4426-be8c-07fd89dff52f] Running
	I1123 10:55:42.241647 1783997 system_pods.go:89] "kube-apiserver-old-k8s-version-162750" [1350c454-aabc-4ecc-b8c0-230c87e88fb5] Running
	I1123 10:55:42.241670 1783997 system_pods.go:89] "kube-controller-manager-old-k8s-version-162750" [67fd41b0-4c9f-4e0a-93f8-d3d298a13ce6] Running
	I1123 10:55:42.241704 1783997 system_pods.go:89] "kube-proxy-79b2j" [e6211a98-e130-4ef0-b3b4-25ab09219fd4] Running
	I1123 10:55:42.241730 1783997 system_pods.go:89] "kube-scheduler-old-k8s-version-162750" [0559d534-1bf5-49e8-871e-48080b9375ee] Running
	I1123 10:55:42.241754 1783997 system_pods.go:89] "storage-provisioner" [3c9ddafc-e744-4085-ab89-dace2cd10a03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:55:42.241813 1783997 retry.go:31] will retry after 211.209028ms: missing components: kube-dns
	I1123 10:55:42.467715 1783997 system_pods.go:86] 8 kube-system pods found
	I1123 10:55:42.467795 1783997 system_pods.go:89] "coredns-5dd5756b68-cxm6d" [5bb94c83-477d-49aa-9ade-b2404e214905] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:55:42.467816 1783997 system_pods.go:89] "etcd-old-k8s-version-162750" [aced8707-2bd1-4ee1-98fe-311294917440] Running
	I1123 10:55:42.467872 1783997 system_pods.go:89] "kindnet-zb6c5" [2801a8b0-1e0a-4426-be8c-07fd89dff52f] Running
	I1123 10:55:42.467895 1783997 system_pods.go:89] "kube-apiserver-old-k8s-version-162750" [1350c454-aabc-4ecc-b8c0-230c87e88fb5] Running
	I1123 10:55:42.467914 1783997 system_pods.go:89] "kube-controller-manager-old-k8s-version-162750" [67fd41b0-4c9f-4e0a-93f8-d3d298a13ce6] Running
	I1123 10:55:42.467934 1783997 system_pods.go:89] "kube-proxy-79b2j" [e6211a98-e130-4ef0-b3b4-25ab09219fd4] Running
	I1123 10:55:42.467975 1783997 system_pods.go:89] "kube-scheduler-old-k8s-version-162750" [0559d534-1bf5-49e8-871e-48080b9375ee] Running
	I1123 10:55:42.468004 1783997 system_pods.go:89] "storage-provisioner" [3c9ddafc-e744-4085-ab89-dace2cd10a03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:55:42.468048 1783997 retry.go:31] will retry after 353.778957ms: missing components: kube-dns
	I1123 10:55:42.826445 1783997 system_pods.go:86] 8 kube-system pods found
	I1123 10:55:42.826475 1783997 system_pods.go:89] "coredns-5dd5756b68-cxm6d" [5bb94c83-477d-49aa-9ade-b2404e214905] Running
	I1123 10:55:42.826482 1783997 system_pods.go:89] "etcd-old-k8s-version-162750" [aced8707-2bd1-4ee1-98fe-311294917440] Running
	I1123 10:55:42.826486 1783997 system_pods.go:89] "kindnet-zb6c5" [2801a8b0-1e0a-4426-be8c-07fd89dff52f] Running
	I1123 10:55:42.826518 1783997 system_pods.go:89] "kube-apiserver-old-k8s-version-162750" [1350c454-aabc-4ecc-b8c0-230c87e88fb5] Running
	I1123 10:55:42.826531 1783997 system_pods.go:89] "kube-controller-manager-old-k8s-version-162750" [67fd41b0-4c9f-4e0a-93f8-d3d298a13ce6] Running
	I1123 10:55:42.826535 1783997 system_pods.go:89] "kube-proxy-79b2j" [e6211a98-e130-4ef0-b3b4-25ab09219fd4] Running
	I1123 10:55:42.826539 1783997 system_pods.go:89] "kube-scheduler-old-k8s-version-162750" [0559d534-1bf5-49e8-871e-48080b9375ee] Running
	I1123 10:55:42.826571 1783997 system_pods.go:89] "storage-provisioner" [3c9ddafc-e744-4085-ab89-dace2cd10a03] Running
	I1123 10:55:42.826608 1783997 system_pods.go:126] duration metric: took 593.269633ms to wait for k8s-apps to be running ...
	I1123 10:55:42.826622 1783997 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:55:42.826696 1783997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:55:42.839784 1783997 system_svc.go:56] duration metric: took 13.15374ms WaitForService to wait for kubelet
	I1123 10:55:42.839811 1783997 kubeadm.go:587] duration metric: took 14.384548279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:55:42.839830 1783997 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:55:42.842751 1783997 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:55:42.842784 1783997 node_conditions.go:123] node cpu capacity is 2
	I1123 10:55:42.842798 1783997 node_conditions.go:105] duration metric: took 2.942496ms to run NodePressure ...
	I1123 10:55:42.842827 1783997 start.go:242] waiting for startup goroutines ...
	I1123 10:55:42.842841 1783997 start.go:247] waiting for cluster config update ...
	I1123 10:55:42.842864 1783997 start.go:256] writing updated cluster config ...
	I1123 10:55:42.843149 1783997 ssh_runner.go:195] Run: rm -f paused
	I1123 10:55:42.846757 1783997 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:55:42.851146 1783997 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-cxm6d" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.856441 1783997 pod_ready.go:94] pod "coredns-5dd5756b68-cxm6d" is "Ready"
	I1123 10:55:42.856475 1783997 pod_ready.go:86] duration metric: took 5.30507ms for pod "coredns-5dd5756b68-cxm6d" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.859545 1783997 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.864209 1783997 pod_ready.go:94] pod "etcd-old-k8s-version-162750" is "Ready"
	I1123 10:55:42.864234 1783997 pod_ready.go:86] duration metric: took 4.663875ms for pod "etcd-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.867129 1783997 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.872330 1783997 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-162750" is "Ready"
	I1123 10:55:42.872353 1783997 pod_ready.go:86] duration metric: took 5.198997ms for pod "kube-apiserver-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.875324 1783997 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:43.251085 1783997 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-162750" is "Ready"
	I1123 10:55:43.251127 1783997 pod_ready.go:86] duration metric: took 375.783039ms for pod "kube-controller-manager-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:43.452047 1783997 pod_ready.go:83] waiting for pod "kube-proxy-79b2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:43.851383 1783997 pod_ready.go:94] pod "kube-proxy-79b2j" is "Ready"
	I1123 10:55:43.851463 1783997 pod_ready.go:86] duration metric: took 399.390198ms for pod "kube-proxy-79b2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:44.051690 1783997 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:44.450999 1783997 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-162750" is "Ready"
	I1123 10:55:44.451024 1783997 pod_ready.go:86] duration metric: took 399.305853ms for pod "kube-scheduler-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:44.451038 1783997 pod_ready.go:40] duration metric: took 1.604247069s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:55:44.520151 1783997 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 10:55:44.523300 1783997 out.go:203] 
	W1123 10:55:44.526225 1783997 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 10:55:44.529256 1783997 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:55:44.532207 1783997 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-162750" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	ba06948ce0b13       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   7b6f4396c2ac4       busybox                                          default
	5a70cbe1d834b       ba04bb24b9575       12 seconds ago      Running             storage-provisioner       0                   a70aca4724ab5       storage-provisioner                              kube-system
	bc5fe6488aa4a       97e04611ad434       12 seconds ago      Running             coredns                   0                   4adbe96b4b541       coredns-5dd5756b68-cxm6d                         kube-system
	9b339c5b61e20       b1a8c6f707935       23 seconds ago      Running             kindnet-cni               0                   286a1b0362bc9       kindnet-zb6c5                                    kube-system
	5edae851933e1       940f54a5bcae9       25 seconds ago      Running             kube-proxy                0                   f64a9d6491bd6       kube-proxy-79b2j                                 kube-system
	86e0ec376568c       9cdd6470f48c8       46 seconds ago      Running             etcd                      0                   6326fefcfce67       etcd-old-k8s-version-162750                      kube-system
	2c46626cae965       00543d2fe5d71       46 seconds ago      Running             kube-apiserver            0                   86c41d194d43d       kube-apiserver-old-k8s-version-162750            kube-system
	b172ecbbc92ae       46cc66ccc7c19       46 seconds ago      Running             kube-controller-manager   0                   2e69a304f2302       kube-controller-manager-old-k8s-version-162750   kube-system
	8fe87db64994a       762dce4090c5f       47 seconds ago      Running             kube-scheduler            0                   47d58bf656f5a       kube-scheduler-old-k8s-version-162750            kube-system
	
	
	==> containerd <==
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.305269997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:3c9ddafc-e744-4085-ab89-dace2cd10a03,Namespace:kube-system,Attempt:0,} returns sandbox id \"a70aca4724ab5c57d7e45ea572a79a2152282f9a26c6c2ae65163ab53bc60287\""
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.322518964Z" level=info msg="CreateContainer within sandbox \"a70aca4724ab5c57d7e45ea572a79a2152282f9a26c6c2ae65163ab53bc60287\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.343620324Z" level=info msg="Container 5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.355157815Z" level=info msg="CreateContainer within sandbox \"a70aca4724ab5c57d7e45ea572a79a2152282f9a26c6c2ae65163ab53bc60287\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77\""
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.358862686Z" level=info msg="StartContainer for \"5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77\""
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.362313229Z" level=info msg="connecting to shim 5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77" address="unix:///run/containerd/s/45d05dd539374b013fce28de2d923c29a1eb650ec6ca7883ae2b1c38a9c10251" protocol=ttrpc version=3
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.427452007Z" level=info msg="StartContainer for \"bc5fe6488aa4a4f759401ceb92d86b05c69c3c6e8091bc15dcae014de37ad281\" returns successfully"
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.452264178Z" level=info msg="StartContainer for \"5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77\" returns successfully"
	Nov 23 10:55:45 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:45.057875535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2dd7549c-5bf6-4864-9a27-188c6854aedd,Namespace:default,Attempt:0,}"
	Nov 23 10:55:45 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:45.136262770Z" level=info msg="connecting to shim 7b6f4396c2ac4e649170e7f117eabde64ed25ed0f5d6e9fb9a05c0d5cfb58f48" address="unix:///run/containerd/s/e1101a7b12b9df4e8726eedc27453ba7cd17794c35a04be3e7f281e3b70ad2a6" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 10:55:45 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:45.247281112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2dd7549c-5bf6-4864-9a27-188c6854aedd,Namespace:default,Attempt:0,} returns sandbox id \"7b6f4396c2ac4e649170e7f117eabde64ed25ed0f5d6e9fb9a05c0d5cfb58f48\""
	Nov 23 10:55:45 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:45.251369979Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.455900600Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.460607190Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.462553398Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.466379668Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.467037569Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.215402302s"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.467248320Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.469006842Z" level=info msg="CreateContainer within sandbox \"7b6f4396c2ac4e649170e7f117eabde64ed25ed0f5d6e9fb9a05c0d5cfb58f48\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.482365147Z" level=info msg="Container ba06948ce0b1317deec10511f50ab4018f7d18a9085725df5cdd748d106034d0: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.492750868Z" level=info msg="CreateContainer within sandbox \"7b6f4396c2ac4e649170e7f117eabde64ed25ed0f5d6e9fb9a05c0d5cfb58f48\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ba06948ce0b1317deec10511f50ab4018f7d18a9085725df5cdd748d106034d0\""
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.493687604Z" level=info msg="StartContainer for \"ba06948ce0b1317deec10511f50ab4018f7d18a9085725df5cdd748d106034d0\""
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.494806735Z" level=info msg="connecting to shim ba06948ce0b1317deec10511f50ab4018f7d18a9085725df5cdd748d106034d0" address="unix:///run/containerd/s/e1101a7b12b9df4e8726eedc27453ba7cd17794c35a04be3e7f281e3b70ad2a6" protocol=ttrpc version=3
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.568067430Z" level=info msg="StartContainer for \"ba06948ce0b1317deec10511f50ab4018f7d18a9085725df5cdd748d106034d0\" returns successfully"
	Nov 23 10:55:53 old-k8s-version-162750 containerd[760]: E1123 10:55:53.878172     760 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [bc5fe6488aa4a4f759401ceb92d86b05c69c3c6e8091bc15dcae014de37ad281] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48632 - 24416 "HINFO IN 3024842233634058345.7861179067670114405. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.057365856s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-162750
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-162750
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=old-k8s-version-162750
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_55_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:55:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-162750
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:55:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:55:47 +0000   Sun, 23 Nov 2025 10:55:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:55:47 +0000   Sun, 23 Nov 2025 10:55:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:55:47 +0000   Sun, 23 Nov 2025 10:55:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:55:47 +0000   Sun, 23 Nov 2025 10:55:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-162750
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                bfae4691-d726-46e0-afa3-b816e5402bb4
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-cxm6d                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-162750                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-zb6c5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-162750             250m (12%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-controller-manager-old-k8s-version-162750    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-79b2j                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-162750             100m (5%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 39s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-162750 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-162750 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-162750 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-162750 event: Registered Node old-k8s-version-162750 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-162750 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:50] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [86e0ec376568c81aa8ee0cf9c45f122bfb574eb7aaf9980bcd17e7f6a947b65d] <==
	{"level":"info","ts":"2025-11-23T10:55:08.414356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-23T10:55:08.414618Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-23T10:55:08.431851Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T10:55:08.431918Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T10:55:08.432071Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T10:55:08.435999Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T10:55:08.436237Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T10:55:08.459276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T10:55:08.459497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T10:55:08.459611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-23T10:55:08.459739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T10:55:08.45982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T10:55:08.459907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-23T10:55:08.459994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T10:55:08.461288Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:55:08.471141Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-162750 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T10:55:08.471348Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:55:08.472539Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T10:55:08.472912Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:55:08.473132Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T10:55:08.473239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T10:55:08.474392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-23T10:55:08.475232Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:55:08.475536Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:55:08.487789Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 10:55:55 up 11:38,  0 user,  load average: 3.20, 3.56, 2.94
	Linux old-k8s-version-162750 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9b339c5b61e202c45c3fdc95ba0af753644dca4aef308dc5f15111615f78f8af] <==
	I1123 10:55:31.429904       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:55:31.431758       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:55:31.431908       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:55:31.431920       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:55:31.432460       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:55:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:55:31.632322       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:55:31.632353       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:55:31.632364       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:55:31.633553       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:55:31.832439       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:55:31.832469       1 metrics.go:72] Registering metrics
	I1123 10:55:31.832663       1 controller.go:711] "Syncing nftables rules"
	I1123 10:55:41.640548       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:55:41.640815       1 main.go:301] handling current node
	I1123 10:55:51.632089       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:55:51.632119       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2c46626cae965cc9b9ae2b696041e30b151618e4aee79c75568edaa726197a16] <==
	I1123 10:55:12.878120       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 10:55:12.878218       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 10:55:12.878522       1 aggregator.go:166] initial CRD sync complete...
	I1123 10:55:12.878662       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 10:55:12.878756       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:55:12.878858       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:55:12.879423       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 10:55:12.879632       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 10:55:12.882027       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 10:55:13.080943       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:55:13.487590       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:55:13.492827       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:55:13.492850       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:55:14.147673       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:55:14.199257       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:55:14.319375       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:55:14.330386       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 10:55:14.331544       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 10:55:14.338132       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:55:14.657050       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 10:55:16.129805       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 10:55:16.154132       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:55:16.165483       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 10:55:27.550634       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 10:55:28.249408       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b172ecbbc92ae00ae0576b5c91f5300dd1600c68cedc8d7906fefff317c2b2ad] <==
	I1123 10:55:27.569285       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 10:55:27.575276       1 shared_informer.go:318] Caches are synced for attach detach
	I1123 10:55:27.588832       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1123 10:55:27.638851       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1123 10:55:28.054052       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:55:28.084968       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:55:28.085001       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 10:55:28.263305       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zb6c5"
	I1123 10:55:28.266556       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-79b2j"
	I1123 10:55:28.407913       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bxh4c"
	I1123 10:55:28.428739       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-cxm6d"
	I1123 10:55:28.448302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="892.411145ms"
	I1123 10:55:28.513752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.199822ms"
	I1123 10:55:28.513850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.506µs"
	I1123 10:55:29.726843       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 10:55:29.762403       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bxh4c"
	I1123 10:55:29.773596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.578157ms"
	I1123 10:55:29.792150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.503716ms"
	I1123 10:55:29.793161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.295µs"
	I1123 10:55:41.726998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.553µs"
	I1123 10:55:41.744627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.644µs"
	I1123 10:55:42.457536       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1123 10:55:42.603288       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.715µs"
	I1123 10:55:42.660947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.283125ms"
	I1123 10:55:42.661290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.252µs"
	
	
	==> kube-proxy [5edae851933e164429106b42e9db4cd11398e2cef913bcace0a893ce92ae2a64] <==
	I1123 10:55:29.389141       1 server_others.go:69] "Using iptables proxy"
	I1123 10:55:29.405946       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1123 10:55:29.507964       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:55:29.509971       1 server_others.go:152] "Using iptables Proxier"
	I1123 10:55:29.510014       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 10:55:29.510087       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 10:55:29.510129       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 10:55:29.510338       1 server.go:846] "Version info" version="v1.28.0"
	I1123 10:55:29.510355       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:55:29.512734       1 config.go:188] "Starting service config controller"
	I1123 10:55:29.512760       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 10:55:29.512781       1 config.go:97] "Starting endpoint slice config controller"
	I1123 10:55:29.512786       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 10:55:29.519667       1 config.go:315] "Starting node config controller"
	I1123 10:55:29.519694       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 10:55:29.613894       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 10:55:29.613956       1 shared_informer.go:318] Caches are synced for service config
	I1123 10:55:29.620220       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8fe87db64994a9a27004272a4bd3ef17202de2d385ebec44e1f454df96ff18cc] <==
	W1123 10:55:13.046294       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 10:55:13.046318       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:55:13.046380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 10:55:13.046398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 10:55:13.046455       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 10:55:13.046471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 10:55:13.046520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 10:55:13.046535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1123 10:55:13.046644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 10:55:13.046660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 10:55:13.046716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 10:55:13.046735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 10:55:13.046810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 10:55:13.046825       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 10:55:13.046875       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 10:55:13.046889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 10:55:13.046944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1123 10:55:13.046959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 10:55:13.047021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1123 10:55:13.047037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1123 10:55:13.865873       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 10:55:13.866133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 10:55:13.914014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 10:55:13.914052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1123 10:55:14.635417       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 10:55:27 old-k8s-version-162750 kubelet[1545]: I1123 10:55:27.529028    1545 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 10:55:27 old-k8s-version-162750 kubelet[1545]: I1123 10:55:27.529618    1545 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.271914    1545 topology_manager.go:215] "Topology Admit Handler" podUID="2801a8b0-1e0a-4426-be8c-07fd89dff52f" podNamespace="kube-system" podName="kindnet-zb6c5"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.302314    1545 topology_manager.go:215] "Topology Admit Handler" podUID="e6211a98-e130-4ef0-b3b4-25ab09219fd4" podNamespace="kube-system" podName="kube-proxy-79b2j"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399504    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6211a98-e130-4ef0-b3b4-25ab09219fd4-xtables-lock\") pod \"kube-proxy-79b2j\" (UID: \"e6211a98-e130-4ef0-b3b4-25ab09219fd4\") " pod="kube-system/kube-proxy-79b2j"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399566    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6211a98-e130-4ef0-b3b4-25ab09219fd4-lib-modules\") pod \"kube-proxy-79b2j\" (UID: \"e6211a98-e130-4ef0-b3b4-25ab09219fd4\") " pod="kube-system/kube-proxy-79b2j"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399601    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6211a98-e130-4ef0-b3b4-25ab09219fd4-kube-proxy\") pod \"kube-proxy-79b2j\" (UID: \"e6211a98-e130-4ef0-b3b4-25ab09219fd4\") " pod="kube-system/kube-proxy-79b2j"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399647    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2801a8b0-1e0a-4426-be8c-07fd89dff52f-cni-cfg\") pod \"kindnet-zb6c5\" (UID: \"2801a8b0-1e0a-4426-be8c-07fd89dff52f\") " pod="kube-system/kindnet-zb6c5"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399674    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2801a8b0-1e0a-4426-be8c-07fd89dff52f-xtables-lock\") pod \"kindnet-zb6c5\" (UID: \"2801a8b0-1e0a-4426-be8c-07fd89dff52f\") " pod="kube-system/kindnet-zb6c5"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399699    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2801a8b0-1e0a-4426-be8c-07fd89dff52f-lib-modules\") pod \"kindnet-zb6c5\" (UID: \"2801a8b0-1e0a-4426-be8c-07fd89dff52f\") " pod="kube-system/kindnet-zb6c5"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399739    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxpns\" (UniqueName: \"kubernetes.io/projected/2801a8b0-1e0a-4426-be8c-07fd89dff52f-kube-api-access-sxpns\") pod \"kindnet-zb6c5\" (UID: \"2801a8b0-1e0a-4426-be8c-07fd89dff52f\") " pod="kube-system/kindnet-zb6c5"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399767    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k45dk\" (UniqueName: \"kubernetes.io/projected/e6211a98-e130-4ef0-b3b4-25ab09219fd4-kube-api-access-k45dk\") pod \"kube-proxy-79b2j\" (UID: \"e6211a98-e130-4ef0-b3b4-25ab09219fd4\") " pod="kube-system/kube-proxy-79b2j"
	Nov 23 10:55:31 old-k8s-version-162750 kubelet[1545]: I1123 10:55:31.579745    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-79b2j" podStartSLOduration=3.579701386 podCreationTimestamp="2025-11-23 10:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:55:29.57435474 +0000 UTC m=+13.495910640" watchObservedRunningTime="2025-11-23 10:55:31.579701386 +0000 UTC m=+15.501257278"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.689932    1545 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.722056    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-zb6c5" podStartSLOduration=11.778808213 podCreationTimestamp="2025-11-23 10:55:28 +0000 UTC" firstStartedPulling="2025-11-23 10:55:29.166586204 +0000 UTC m=+13.088142096" lastFinishedPulling="2025-11-23 10:55:31.109797113 +0000 UTC m=+15.031353004" observedRunningTime="2025-11-23 10:55:31.581628027 +0000 UTC m=+15.503183918" watchObservedRunningTime="2025-11-23 10:55:41.722019121 +0000 UTC m=+25.643575013"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.722408    1545 topology_manager.go:215] "Topology Admit Handler" podUID="5bb94c83-477d-49aa-9ade-b2404e214905" podNamespace="kube-system" podName="coredns-5dd5756b68-cxm6d"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.728765    1545 topology_manager.go:215] "Topology Admit Handler" podUID="3c9ddafc-e744-4085-ab89-dace2cd10a03" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.806388    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bb94c83-477d-49aa-9ade-b2404e214905-config-volume\") pod \"coredns-5dd5756b68-cxm6d\" (UID: \"5bb94c83-477d-49aa-9ade-b2404e214905\") " pod="kube-system/coredns-5dd5756b68-cxm6d"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.806474    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69f29\" (UniqueName: \"kubernetes.io/projected/5bb94c83-477d-49aa-9ade-b2404e214905-kube-api-access-69f29\") pod \"coredns-5dd5756b68-cxm6d\" (UID: \"5bb94c83-477d-49aa-9ade-b2404e214905\") " pod="kube-system/coredns-5dd5756b68-cxm6d"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.806515    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3c9ddafc-e744-4085-ab89-dace2cd10a03-tmp\") pod \"storage-provisioner\" (UID: \"3c9ddafc-e744-4085-ab89-dace2cd10a03\") " pod="kube-system/storage-provisioner"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.806548    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjr8\" (UniqueName: \"kubernetes.io/projected/3c9ddafc-e744-4085-ab89-dace2cd10a03-kube-api-access-dkjr8\") pod \"storage-provisioner\" (UID: \"3c9ddafc-e744-4085-ab89-dace2cd10a03\") " pod="kube-system/storage-provisioner"
	Nov 23 10:55:42 old-k8s-version-162750 kubelet[1545]: I1123 10:55:42.625897    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-cxm6d" podStartSLOduration=14.625853822 podCreationTimestamp="2025-11-23 10:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:55:42.604047235 +0000 UTC m=+26.525603126" watchObservedRunningTime="2025-11-23 10:55:42.625853822 +0000 UTC m=+26.547409722"
	Nov 23 10:55:44 old-k8s-version-162750 kubelet[1545]: I1123 10:55:44.753352    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.753311461 podCreationTimestamp="2025-11-23 10:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:55:42.66406931 +0000 UTC m=+26.585625210" watchObservedRunningTime="2025-11-23 10:55:44.753311461 +0000 UTC m=+28.674867352"
	Nov 23 10:55:44 old-k8s-version-162750 kubelet[1545]: I1123 10:55:44.753537    1545 topology_manager.go:215] "Topology Admit Handler" podUID="2dd7549c-5bf6-4864-9a27-188c6854aedd" podNamespace="default" podName="busybox"
	Nov 23 10:55:44 old-k8s-version-162750 kubelet[1545]: I1123 10:55:44.847562    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b5t8\" (UniqueName: \"kubernetes.io/projected/2dd7549c-5bf6-4864-9a27-188c6854aedd-kube-api-access-9b5t8\") pod \"busybox\" (UID: \"2dd7549c-5bf6-4864-9a27-188c6854aedd\") " pod="default/busybox"
	
	
	==> storage-provisioner [5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77] <==
	I1123 10:55:42.457667       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:55:42.481260       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:55:42.481489       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 10:55:42.489314       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:55:42.489547       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-162750_50e25990-0844-41df-ae82-fbe719da9f7e!
	I1123 10:55:42.490536       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b658e7d2-aa8d-4b62-a5b7-ea9d07cb7dad", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-162750_50e25990-0844-41df-ae82-fbe719da9f7e became leader
	I1123 10:55:42.590544       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-162750_50e25990-0844-41df-ae82-fbe719da9f7e!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162750 -n old-k8s-version-162750
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-162750 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
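To re-run just this failed subtest outside CI, the test name from the report above can be passed to the standard Go test runner. A minimal sketch, assuming a local checkout of the minikube repository; the package path and timeout are assumptions, only the test name is taken from this report:

    # Hypothetical local reproduction; ./test/integration/... is an assumed package path.
    go test ./test/integration/... \
        -run 'TestStartStop/group/old-k8s-version/serial/DeployApp' \
        -timeout 60m -v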
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-162750
helpers_test.go:243: (dbg) docker inspect old-k8s-version-162750:

-- stdout --
	[
	    {
	        "Id": "3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f",
	        "Created": "2025-11-23T10:54:51.953481943Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1784384,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:54:52.022835067Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f/hostname",
	        "HostsPath": "/var/lib/docker/containers/3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f/hosts",
	        "LogPath": "/var/lib/docker/containers/3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f/3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f-json.log",
	        "Name": "/old-k8s-version-162750",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-162750:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-162750",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3b748cbca934fd15518227856061285d1e2f3789570cb2556fc747f5b0c5906f",
	                "LowerDir": "/var/lib/docker/overlay2/c63ab6dc690b90f0078b1181c7b2482e6dae576e4a4a9931a0cf9180a42049dc-init/diff:/var/lib/docker/overlay2/fe0bef51c968206096993e9a75db2143cd9cd74d56696a257291ce63f851a2d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/c63ab6dc690b90f0078b1181c7b2482e6dae576e4a4a9931a0cf9180a42049dc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/c63ab6dc690b90f0078b1181c7b2482e6dae576e4a4a9931a0cf9180a42049dc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/c63ab6dc690b90f0078b1181c7b2482e6dae576e4a4a9931a0cf9180a42049dc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-162750",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-162750/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-162750",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-162750",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-162750",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1370920c28612b4b9ebc39eabef858b90e52f7c6a4afe5df6f209380389afe4b",
	            "SandboxKey": "/var/run/docker/netns/1370920c2861",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35254"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35255"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35258"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35256"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35257"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-162750": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8e:aa:61:1e:67:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2204e151d724b9aa254e197ae8f573fd169c40786f9413d1d5be71fa8ea2a8bd",
	                    "EndpointID": "36252d81aa94c0d5a35dc4c0eb261a48fafa705636f6aff8cab40a27543c011e",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-162750",
	                        "3b748cbca934"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
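The host-side port bindings recorded in the inspect output above (22, 2376, 5000, 8443 and 32443, all bound to 127.0.0.1) can be read back from a live container with the same Go template that minikube's own cli_runner invocations use later in these logs. A minimal sketch, assuming the docker CLI is available on the build host; the profile name is the container name from this report:

    # Profile/container name taken from the report above.
    PROFILE=old-k8s-version-162750
    # Host port bound to the API server (8443/tcp inside the container).
    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' "$PROFILE"
    # Host port bound to SSH (22/tcp).
    docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' "$PROFILE"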
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162750 -n old-k8s-version-162750
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-162750 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-162750 logs -n 25: (1.208809214s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-378762 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo containerd config dump                                                                                                                                                                                                        │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ ssh     │ -p cilium-378762 sudo crio config                                                                                                                                                                                                                   │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ delete  │ -p cilium-378762                                                                                                                                                                                                                                    │ cilium-378762             │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:53 UTC │
	│ start   │ -p force-systemd-env-479166 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:53 UTC │
	│ delete  │ -p kubernetes-upgrade-871841                                                                                                                                                                                                                        │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:53 UTC │
	│ start   │ -p cert-expiration-679101 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-679101    │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ force-systemd-env-479166 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p force-systemd-env-479166                                                                                                                                                                                                                         │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p cert-options-501705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ cert-options-501705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ -p cert-options-501705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p cert-options-501705                                                                                                                                                                                                                              │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:55 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:54:45
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:54:45.778702 1783997 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:54:45.779263 1783997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:54:45.779278 1783997 out.go:374] Setting ErrFile to fd 2...
	I1123 10:54:45.779293 1783997 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:54:45.779694 1783997 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:54:45.780318 1783997 out.go:368] Setting JSON to false
	I1123 10:54:45.781475 1783997 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":41831,"bootTime":1763853455,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:54:45.781584 1783997 start.go:143] virtualization:  
	I1123 10:54:45.785225 1783997 out.go:179] * [old-k8s-version-162750] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:54:45.789850 1783997 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:54:45.789926 1783997 notify.go:221] Checking for updates...
	I1123 10:54:45.796654 1783997 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:54:45.799993 1783997 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:54:45.803378 1783997 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:54:45.806538 1783997 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:54:45.809836 1783997 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:54:45.813538 1783997 config.go:182] Loaded profile config "cert-expiration-679101": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:54:45.813697 1783997 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:54:45.861804 1783997 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:54:45.861945 1783997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:54:45.925616 1783997 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:54:45.915981674 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:54:45.925768 1783997 docker.go:319] overlay module found
	I1123 10:54:45.930988 1783997 out.go:179] * Using the docker driver based on user configuration
	I1123 10:54:45.934033 1783997 start.go:309] selected driver: docker
	I1123 10:54:45.934059 1783997 start.go:927] validating driver "docker" against <nil>
	I1123 10:54:45.934073 1783997 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:54:45.934843 1783997 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:54:45.994571 1783997 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:54:45.985733383 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:54:45.994732 1783997 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:54:45.995011 1783997 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:54:45.997946 1783997 out.go:179] * Using Docker driver with root privileges
	I1123 10:54:46.001501 1783997 cni.go:84] Creating CNI manager for ""
	I1123 10:54:46.001616 1783997 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:54:46.001629 1783997 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:54:46.001728 1783997 start.go:353] cluster config:
	{Name:old-k8s-version-162750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:54:46.005335 1783997 out.go:179] * Starting "old-k8s-version-162750" primary control-plane node in "old-k8s-version-162750" cluster
	I1123 10:54:46.008262 1783997 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 10:54:46.011249 1783997 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:54:46.014166 1783997 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 10:54:46.014230 1783997 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 10:54:46.014254 1783997 cache.go:65] Caching tarball of preloaded images
	I1123 10:54:46.014261 1783997 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:54:46.014340 1783997 preload.go:238] Found /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 10:54:46.014351 1783997 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 10:54:46.014459 1783997 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/config.json ...
	I1123 10:54:46.014476 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/config.json: {Name:mk5eef821183a362255c44f8410d633523a499ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:46.034721 1783997 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:54:46.034749 1783997 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:54:46.034782 1783997 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:54:46.034830 1783997 start.go:360] acquireMachinesLock for old-k8s-version-162750: {Name:mk0f3804e6ccc6cb84c4dea8eb218364814cd6db Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:54:46.034953 1783997 start.go:364] duration metric: took 100.469µs to acquireMachinesLock for "old-k8s-version-162750"
	I1123 10:54:46.034987 1783997 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-162750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162750 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:54:46.035063 1783997 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:54:46.040311 1783997 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:54:46.040562 1783997 start.go:159] libmachine.API.Create for "old-k8s-version-162750" (driver="docker")
	I1123 10:54:46.040603 1783997 client.go:173] LocalClient.Create starting
	I1123 10:54:46.040681 1783997 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem
	I1123 10:54:46.040726 1783997 main.go:143] libmachine: Decoding PEM data...
	I1123 10:54:46.040752 1783997 main.go:143] libmachine: Parsing certificate...
	I1123 10:54:46.040826 1783997 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem
	I1123 10:54:46.040852 1783997 main.go:143] libmachine: Decoding PEM data...
	I1123 10:54:46.040869 1783997 main.go:143] libmachine: Parsing certificate...
	I1123 10:54:46.041257 1783997 cli_runner.go:164] Run: docker network inspect old-k8s-version-162750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:54:46.058141 1783997 cli_runner.go:211] docker network inspect old-k8s-version-162750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:54:46.058237 1783997 network_create.go:284] running [docker network inspect old-k8s-version-162750] to gather additional debugging logs...
	I1123 10:54:46.058260 1783997 cli_runner.go:164] Run: docker network inspect old-k8s-version-162750
	W1123 10:54:46.075827 1783997 cli_runner.go:211] docker network inspect old-k8s-version-162750 returned with exit code 1
	I1123 10:54:46.075860 1783997 network_create.go:287] error running [docker network inspect old-k8s-version-162750]: docker network inspect old-k8s-version-162750: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-162750 not found
	I1123 10:54:46.075874 1783997 network_create.go:289] output of [docker network inspect old-k8s-version-162750]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-162750 not found
	
	** /stderr **
	I1123 10:54:46.075987 1783997 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:54:46.092827 1783997 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e44f782e1ead IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ae:ef:b1:2b:de} reservation:<nil>}
	I1123 10:54:46.093109 1783997 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d795300f262d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:f7:c2:f9:ad:5b} reservation:<nil>}
	I1123 10:54:46.093426 1783997 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4b6f246690b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:41:9a:79:92:5d} reservation:<nil>}
	I1123 10:54:46.093747 1783997 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c1baa3e8d750 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:36:db:69:d0:2a:57} reservation:<nil>}
	I1123 10:54:46.094196 1783997 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018f7cf0}
	I1123 10:54:46.094224 1783997 network_create.go:124] attempt to create docker network old-k8s-version-162750 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 10:54:46.094288 1783997 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-162750 old-k8s-version-162750
	I1123 10:54:46.153890 1783997 network_create.go:108] docker network old-k8s-version-162750 192.168.85.0/24 created
	I1123 10:54:46.153925 1783997 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-162750" container
	I1123 10:54:46.153999 1783997 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:54:46.169956 1783997 cli_runner.go:164] Run: docker volume create old-k8s-version-162750 --label name.minikube.sigs.k8s.io=old-k8s-version-162750 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:54:46.186824 1783997 oci.go:103] Successfully created a docker volume old-k8s-version-162750
	I1123 10:54:46.186944 1783997 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-162750-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-162750 --entrypoint /usr/bin/test -v old-k8s-version-162750:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:54:46.708368 1783997 oci.go:107] Successfully prepared a docker volume old-k8s-version-162750
	I1123 10:54:46.708439 1783997 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 10:54:46.708453 1783997 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:54:46.708536 1783997 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-162750:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:54:51.880252 1783997 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-162750:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.171646405s)
	I1123 10:54:51.880292 1783997 kic.go:203] duration metric: took 5.171834502s to extract preloaded images to volume ...
	W1123 10:54:51.880429 1783997 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:54:51.880567 1783997 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:54:51.937741 1783997 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-162750 --name old-k8s-version-162750 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-162750 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-162750 --network old-k8s-version-162750 --ip 192.168.85.2 --volume old-k8s-version-162750:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:54:52.249456 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Running}}
	I1123 10:54:52.270863 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:54:52.301164 1783997 cli_runner.go:164] Run: docker exec old-k8s-version-162750 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:54:52.361371 1783997 oci.go:144] the created container "old-k8s-version-162750" has a running status.
	I1123 10:54:52.361399 1783997 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa...
	I1123 10:54:53.193748 1783997 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:54:53.218957 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:54:53.240299 1783997 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:54:53.240318 1783997 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-162750 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:54:53.285222 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:54:53.318410 1783997 machine.go:94] provisionDockerMachine start ...
	I1123 10:54:53.318519 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:53.342415 1783997 main.go:143] libmachine: Using SSH client type: native
	I1123 10:54:53.342758 1783997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35254 <nil> <nil>}
	I1123 10:54:53.342768 1783997 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:54:53.506886 1783997 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-162750
	
	I1123 10:54:53.506951 1783997 ubuntu.go:182] provisioning hostname "old-k8s-version-162750"
	I1123 10:54:53.507054 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:53.526225 1783997 main.go:143] libmachine: Using SSH client type: native
	I1123 10:54:53.526575 1783997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35254 <nil> <nil>}
	I1123 10:54:53.526587 1783997 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-162750 && echo "old-k8s-version-162750" | sudo tee /etc/hostname
	I1123 10:54:53.698430 1783997 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-162750
	
	I1123 10:54:53.698512 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:53.716977 1783997 main.go:143] libmachine: Using SSH client type: native
	I1123 10:54:53.717284 1783997 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35254 <nil> <nil>}
	I1123 10:54:53.717301 1783997 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-162750' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-162750/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-162750' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:54:53.867695 1783997 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:54:53.867736 1783997 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-1582671/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-1582671/.minikube}
	I1123 10:54:53.867756 1783997 ubuntu.go:190] setting up certificates
	I1123 10:54:53.867764 1783997 provision.go:84] configureAuth start
	I1123 10:54:53.867825 1783997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-162750
	I1123 10:54:53.885136 1783997 provision.go:143] copyHostCerts
	I1123 10:54:53.885204 1783997 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem, removing ...
	I1123 10:54:53.885214 1783997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem
	I1123 10:54:53.885304 1783997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem (1078 bytes)
	I1123 10:54:53.885408 1783997 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem, removing ...
	I1123 10:54:53.885414 1783997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem
	I1123 10:54:53.885444 1783997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem (1123 bytes)
	I1123 10:54:53.885493 1783997 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem, removing ...
	I1123 10:54:53.885497 1783997 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem
	I1123 10:54:53.885520 1783997 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem (1675 bytes)
	I1123 10:54:53.885565 1783997 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-162750 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-162750]
	I1123 10:54:54.011017 1783997 provision.go:177] copyRemoteCerts
	I1123 10:54:54.011141 1783997 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:54:54.011241 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:54.029286 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:54:54.139377 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:54:54.158012 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 10:54:54.175734 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:54:54.193130 1783997 provision.go:87] duration metric: took 325.351961ms to configureAuth
	I1123 10:54:54.193154 1783997 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:54:54.193341 1783997 config.go:182] Loaded profile config "old-k8s-version-162750": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 10:54:54.193348 1783997 machine.go:97] duration metric: took 874.916799ms to provisionDockerMachine
	I1123 10:54:54.193354 1783997 client.go:176] duration metric: took 8.152740764s to LocalClient.Create
	I1123 10:54:54.193376 1783997 start.go:167] duration metric: took 8.152816487s to libmachine.API.Create "old-k8s-version-162750"
	I1123 10:54:54.193383 1783997 start.go:293] postStartSetup for "old-k8s-version-162750" (driver="docker")
	I1123 10:54:54.193391 1783997 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:54:54.193437 1783997 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:54:54.193482 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:54.210021 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:54:54.325275 1783997 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:54:54.329024 1783997 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:54:54.329049 1783997 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:54:54.329061 1783997 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/addons for local assets ...
	I1123 10:54:54.329120 1783997 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/files for local assets ...
	I1123 10:54:54.329201 1783997 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem -> 15845322.pem in /etc/ssl/certs
	I1123 10:54:54.329310 1783997 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:54:54.338040 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:54:54.358007 1783997 start.go:296] duration metric: took 164.608782ms for postStartSetup
	I1123 10:54:54.358447 1783997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-162750
	I1123 10:54:54.375573 1783997 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/config.json ...
	I1123 10:54:54.375883 1783997 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:54:54.375946 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:54.393024 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:54:54.496581 1783997 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:54:54.501414 1783997 start.go:128] duration metric: took 8.466335269s to createHost
	I1123 10:54:54.501446 1783997 start.go:83] releasing machines lock for "old-k8s-version-162750", held for 8.466480463s
	I1123 10:54:54.501515 1783997 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-162750
	I1123 10:54:54.520198 1783997 ssh_runner.go:195] Run: cat /version.json
	I1123 10:54:54.520243 1783997 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:54:54.520249 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:54.520311 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:54:54.551362 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:54:54.553162 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:54:54.747658 1783997 ssh_runner.go:195] Run: systemctl --version
	I1123 10:54:54.754098 1783997 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:54:54.761562 1783997 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:54:54.761662 1783997 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:54:54.788880 1783997 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 10:54:54.788906 1783997 start.go:496] detecting cgroup driver to use...
	I1123 10:54:54.788940 1783997 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:54:54.789013 1783997 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 10:54:54.803874 1783997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 10:54:54.816732 1783997 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:54:54.816825 1783997 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:54:54.834220 1783997 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:54:54.853351 1783997 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:54:54.974263 1783997 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:54:55.105120 1783997 docker.go:234] disabling docker service ...
	I1123 10:54:55.105245 1783997 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:54:55.130111 1783997 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:54:55.145070 1783997 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:54:55.267040 1783997 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:54:55.418898 1783997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:54:55.433109 1783997 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:54:55.450908 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1123 10:54:55.460521 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 10:54:55.470466 1783997 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 10:54:55.470581 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 10:54:55.480234 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:54:55.489346 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 10:54:55.498570 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:54:55.508154 1783997 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:54:55.516432 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 10:54:55.526011 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 10:54:55.540221 1783997 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 10:54:55.550647 1783997 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:54:55.558569 1783997 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:54:55.566399 1783997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:54:55.688971 1783997 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 10:54:55.823826 1783997 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 10:54:55.823941 1783997 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 10:54:55.827847 1783997 start.go:564] Will wait 60s for crictl version
	I1123 10:54:55.827950 1783997 ssh_runner.go:195] Run: which crictl
	I1123 10:54:55.831640 1783997 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:54:55.858457 1783997 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 10:54:55.858567 1783997 ssh_runner.go:195] Run: containerd --version
	I1123 10:54:55.881109 1783997 ssh_runner.go:195] Run: containerd --version
	I1123 10:54:55.905016 1783997 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1123 10:54:55.907979 1783997 cli_runner.go:164] Run: docker network inspect old-k8s-version-162750 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:54:55.924445 1783997 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:54:55.928343 1783997 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:54:55.938405 1783997 kubeadm.go:884] updating cluster {Name:old-k8s-version-162750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:54:55.938526 1783997 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 10:54:55.938599 1783997 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:54:55.964746 1783997 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 10:54:55.964769 1783997 containerd.go:534] Images already preloaded, skipping extraction
	I1123 10:54:55.964834 1783997 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:54:55.992799 1783997 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 10:54:55.992822 1783997 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:54:55.992831 1783997 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1123 10:54:55.992924 1783997 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-162750 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:54:55.992985 1783997 ssh_runner.go:195] Run: sudo crictl info
	I1123 10:54:56.022425 1783997 cni.go:84] Creating CNI manager for ""
	I1123 10:54:56.022448 1783997 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:54:56.022463 1783997 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:54:56.022486 1783997 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-162750 NodeName:old-k8s-version-162750 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:54:56.022643 1783997 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-162750"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:54:56.022714 1783997 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 10:54:56.031515 1783997 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:54:56.031590 1783997 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:54:56.040467 1783997 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1123 10:54:56.054737 1783997 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:54:56.069711 1783997 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1123 10:54:56.085335 1783997 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:54:56.089252 1783997 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:54:56.099679 1783997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:54:56.231828 1783997 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:54:56.250164 1783997 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750 for IP: 192.168.85.2
	I1123 10:54:56.250236 1783997 certs.go:195] generating shared ca certs ...
	I1123 10:54:56.250266 1783997 certs.go:227] acquiring lock for ca certs: {Name:mk3cca888d785818ac92c3c8d4e66a37bae0b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:56.250450 1783997 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key
	I1123 10:54:56.250524 1783997 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key
	I1123 10:54:56.250557 1783997 certs.go:257] generating profile certs ...
	I1123 10:54:56.250632 1783997 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.key
	I1123 10:54:56.250670 1783997 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt with IP's: []
	I1123 10:54:56.568517 1783997 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt ...
	I1123 10:54:56.568551 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: {Name:mk160e046d920c647b09293b52a55655d4f79645 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:56.568755 1783997 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.key ...
	I1123 10:54:56.568771 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.key: {Name:mkcf59a92983eb562e64ef836dbe11b8eebc9090 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:56.568869 1783997 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key.9a4a4bf9
	I1123 10:54:56.568889 1783997 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt.9a4a4bf9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 10:54:56.858965 1783997 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt.9a4a4bf9 ...
	I1123 10:54:56.858994 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt.9a4a4bf9: {Name:mk074c30cec604803cd4dceea20cabf9824439f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:56.859197 1783997 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key.9a4a4bf9 ...
	I1123 10:54:56.859210 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key.9a4a4bf9: {Name:mkf618c3e44d5bab75985d611632bd8af39340de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:56.859310 1783997 certs.go:382] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt.9a4a4bf9 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt
	I1123 10:54:56.859391 1783997 certs.go:386] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key.9a4a4bf9 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key
	I1123 10:54:56.859449 1783997 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.key
	I1123 10:54:56.859466 1783997 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.crt with IP's: []
	I1123 10:54:57.068178 1783997 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.crt ...
	I1123 10:54:57.068207 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.crt: {Name:mk2e7a4c936e1d1dac560fcaa9cb1621ab7cb5b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:57.068389 1783997 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.key ...
	I1123 10:54:57.068402 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.key: {Name:mkb3bf3285704f3eef03f7d9bab92686c229ead8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:54:57.068595 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem (1338 bytes)
	W1123 10:54:57.068641 1783997 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532_empty.pem, impossibly tiny 0 bytes
	I1123 10:54:57.068651 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:54:57.068677 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:54:57.068704 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:54:57.068730 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem (1675 bytes)
	I1123 10:54:57.068799 1783997 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:54:57.069357 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:54:57.089272 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:54:57.107948 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:54:57.128561 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:54:57.153366 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 10:54:57.171471 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:54:57.189606 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:54:57.211915 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 10:54:57.232653 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:54:57.250846 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem --> /usr/share/ca-certificates/1584532.pem (1338 bytes)
	I1123 10:54:57.269192 1783997 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /usr/share/ca-certificates/15845322.pem (1708 bytes)
	I1123 10:54:57.287405 1783997 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:54:57.301602 1783997 ssh_runner.go:195] Run: openssl version
	I1123 10:54:57.307994 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584532.pem && ln -fs /usr/share/ca-certificates/1584532.pem /etc/ssl/certs/1584532.pem"
	I1123 10:54:57.317490 1783997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584532.pem
	I1123 10:54:57.323882 1783997 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:17 /usr/share/ca-certificates/1584532.pem
	I1123 10:54:57.323949 1783997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584532.pem
	I1123 10:54:57.365456 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584532.pem /etc/ssl/certs/51391683.0"
	I1123 10:54:57.373849 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15845322.pem && ln -fs /usr/share/ca-certificates/15845322.pem /etc/ssl/certs/15845322.pem"
	I1123 10:54:57.382387 1783997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15845322.pem
	I1123 10:54:57.386098 1783997 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:17 /usr/share/ca-certificates/15845322.pem
	I1123 10:54:57.386204 1783997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15845322.pem
	I1123 10:54:57.427012 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15845322.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:54:57.435584 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:54:57.444131 1783997 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:54:57.447981 1783997 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:10 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:54:57.448074 1783997 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:54:57.489133 1783997 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:54:57.497721 1783997 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:54:57.501803 1783997 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:54:57.501856 1783997 kubeadm.go:401] StartCluster: {Name:old-k8s-version-162750 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-162750 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:54:57.501934 1783997 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 10:54:57.502002 1783997 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:54:57.530473 1783997 cri.go:89] found id: ""
	I1123 10:54:57.530624 1783997 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:54:57.538954 1783997 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:54:57.546859 1783997 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:54:57.546928 1783997 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:54:57.555016 1783997 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:54:57.555084 1783997 kubeadm.go:158] found existing configuration files:
	
	I1123 10:54:57.555156 1783997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:54:57.563434 1783997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:54:57.563545 1783997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:54:57.571332 1783997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:54:57.579168 1783997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:54:57.579259 1783997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:54:57.586662 1783997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:54:57.594768 1783997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:54:57.594868 1783997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:54:57.602273 1783997 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:54:57.610319 1783997 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:54:57.610390 1783997 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:54:57.617969 1783997 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:54:57.662566 1783997 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1123 10:54:57.662726 1783997 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:54:57.700306 1783997 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:54:57.700427 1783997 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:54:57.700488 1783997 kubeadm.go:319] OS: Linux
	I1123 10:54:57.700554 1783997 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:54:57.700629 1783997 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:54:57.700698 1783997 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:54:57.700792 1783997 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:54:57.700861 1783997 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:54:57.700946 1783997 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:54:57.701011 1783997 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:54:57.701089 1783997 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:54:57.701156 1783997 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:54:57.785554 1783997 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:54:57.785678 1783997 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:54:57.785792 1783997 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1123 10:54:57.967860 1783997 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:54:57.974440 1783997 out.go:252]   - Generating certificates and keys ...
	I1123 10:54:57.974603 1783997 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:54:57.974714 1783997 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:54:58.465232 1783997 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:54:58.683296 1783997 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:54:59.280296 1783997 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:54:59.791233 1783997 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:55:00.165766 1783997 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:55:00.165918 1783997 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-162750] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:55:01.066202 1783997 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:55:01.066469 1783997 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-162750] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:55:01.561871 1783997 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:55:02.017621 1783997 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:55:02.474583 1783997 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:55:02.475152 1783997 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:55:02.836024 1783997 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:55:03.718945 1783997 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:55:04.189939 1783997 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:55:04.931734 1783997 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:55:04.932956 1783997 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:55:04.936030 1783997 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:55:04.939746 1783997 out.go:252]   - Booting up control plane ...
	I1123 10:55:04.939857 1783997 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:55:04.939962 1783997 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:55:04.941019 1783997 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:55:04.961212 1783997 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:55:04.961315 1783997 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:55:04.961359 1783997 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:55:05.101616 1783997 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1123 10:55:14.605046 1783997 kubeadm.go:319] [apiclient] All control plane components are healthy after 9.503837 seconds
	I1123 10:55:14.605170 1783997 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:55:14.625306 1783997 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:55:15.168835 1783997 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:55:15.169046 1783997 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-162750 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:55:15.681392 1783997 kubeadm.go:319] [bootstrap-token] Using token: b6ms7a.52dw6vj4aucktnza
	I1123 10:55:15.684473 1783997 out.go:252]   - Configuring RBAC rules ...
	I1123 10:55:15.684620 1783997 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:55:15.691075 1783997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:55:15.700139 1783997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:55:15.714561 1783997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:55:15.722689 1783997 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:55:15.729225 1783997 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:55:15.752397 1783997 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:55:16.155418 1783997 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:55:16.212380 1783997 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:55:16.214126 1783997 kubeadm.go:319] 
	I1123 10:55:16.214205 1783997 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:55:16.214216 1783997 kubeadm.go:319] 
	I1123 10:55:16.214293 1783997 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:55:16.214302 1783997 kubeadm.go:319] 
	I1123 10:55:16.214342 1783997 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:55:16.214416 1783997 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:55:16.214474 1783997 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:55:16.214483 1783997 kubeadm.go:319] 
	I1123 10:55:16.214537 1783997 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:55:16.214543 1783997 kubeadm.go:319] 
	I1123 10:55:16.214592 1783997 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:55:16.214600 1783997 kubeadm.go:319] 
	I1123 10:55:16.214652 1783997 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:55:16.214731 1783997 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:55:16.214810 1783997 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:55:16.214817 1783997 kubeadm.go:319] 
	I1123 10:55:16.214901 1783997 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:55:16.214982 1783997 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:55:16.214988 1783997 kubeadm.go:319] 
	I1123 10:55:16.215072 1783997 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token b6ms7a.52dw6vj4aucktnza \
	I1123 10:55:16.215244 1783997 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 \
	I1123 10:55:16.215274 1783997 kubeadm.go:319] 	--control-plane 
	I1123 10:55:16.215280 1783997 kubeadm.go:319] 
	I1123 10:55:16.215371 1783997 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:55:16.215379 1783997 kubeadm.go:319] 
	I1123 10:55:16.215461 1783997 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token b6ms7a.52dw6vj4aucktnza \
	I1123 10:55:16.215567 1783997 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 
	I1123 10:55:16.220081 1783997 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:55:16.220204 1783997 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:55:16.220225 1783997 cni.go:84] Creating CNI manager for ""
	I1123 10:55:16.220237 1783997 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:55:16.223476 1783997 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:55:16.226383 1783997 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:55:16.231956 1783997 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1123 10:55:16.231984 1783997 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:55:16.261607 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:55:17.575403 1783997 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.313763507s)
	I1123 10:55:17.575444 1783997 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:55:17.575566 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:17.575660 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-162750 minikube.k8s.io/updated_at=2025_11_23T10_55_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=old-k8s-version-162750 minikube.k8s.io/primary=true
	I1123 10:55:17.773918 1783997 ops.go:34] apiserver oom_adj: -16
	I1123 10:55:17.774078 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:18.274586 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:18.774782 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:19.274628 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:19.774305 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:20.274200 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:20.774754 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:21.274587 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:21.774695 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:22.274692 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:22.774493 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:23.274487 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:23.774994 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:24.274216 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:24.774862 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:25.274883 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:25.775167 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:26.274163 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:26.774629 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:27.274395 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:27.774170 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:28.274787 1783997 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:55:28.453896 1783997 kubeadm.go:1114] duration metric: took 10.878380592s to wait for elevateKubeSystemPrivileges
	I1123 10:55:28.453924 1783997 kubeadm.go:403] duration metric: took 30.952072121s to StartCluster
	I1123 10:55:28.453940 1783997 settings.go:142] acquiring lock: {Name:mk2ffa164862318fd53ac563f81d54c15c17157b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:55:28.453999 1783997 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:55:28.454990 1783997 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:55:28.455237 1783997 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:55:28.455398 1783997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:55:28.455651 1783997 config.go:182] Loaded profile config "old-k8s-version-162750": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 10:55:28.455688 1783997 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:55:28.455746 1783997 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-162750"
	I1123 10:55:28.455759 1783997 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-162750"
	I1123 10:55:28.455778 1783997 host.go:66] Checking if "old-k8s-version-162750" exists ...
	I1123 10:55:28.456266 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:55:28.456747 1783997 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-162750"
	I1123 10:55:28.456764 1783997 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-162750"
	I1123 10:55:28.457044 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:55:28.461189 1783997 out.go:179] * Verifying Kubernetes components...
	I1123 10:55:28.464749 1783997 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:55:28.496981 1783997 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:55:28.499578 1783997 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-162750"
	I1123 10:55:28.499615 1783997 host.go:66] Checking if "old-k8s-version-162750" exists ...
	I1123 10:55:28.501843 1783997 cli_runner.go:164] Run: docker container inspect old-k8s-version-162750 --format={{.State.Status}}
	I1123 10:55:28.502111 1783997 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:55:28.502126 1783997 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:55:28.502165 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:55:28.543332 1783997 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:55:28.543355 1783997 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:55:28.543416 1783997 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-162750
	I1123 10:55:28.545228 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:55:28.576806 1783997 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35254 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/old-k8s-version-162750/id_rsa Username:docker}
	I1123 10:55:28.784725 1783997 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:55:28.809554 1783997 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:55:28.815162 1783997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:55:28.824290 1783997 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:55:29.662463 1783997 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:55:29.664108 1783997 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-162750" to be "Ready" ...
	I1123 10:55:30.126374 1783997 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.302009167s)
	I1123 10:55:30.129697 1783997 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 10:55:30.132617 1783997 addons.go:530] duration metric: took 1.676919337s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 10:55:30.167772 1783997 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-162750" context rescaled to 1 replicas
	W1123 10:55:31.668910 1783997 node_ready.go:57] node "old-k8s-version-162750" has "Ready":"False" status (will retry)
	W1123 10:55:34.167440 1783997 node_ready.go:57] node "old-k8s-version-162750" has "Ready":"False" status (will retry)
	W1123 10:55:36.167739 1783997 node_ready.go:57] node "old-k8s-version-162750" has "Ready":"False" status (will retry)
	W1123 10:55:38.668076 1783997 node_ready.go:57] node "old-k8s-version-162750" has "Ready":"False" status (will retry)
	W1123 10:55:41.168316 1783997 node_ready.go:57] node "old-k8s-version-162750" has "Ready":"False" status (will retry)
	I1123 10:55:42.169908 1783997 node_ready.go:49] node "old-k8s-version-162750" is "Ready"
	I1123 10:55:42.169951 1783997 node_ready.go:38] duration metric: took 12.505541365s for node "old-k8s-version-162750" to be "Ready" ...
	I1123 10:55:42.169971 1783997 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:55:42.170085 1783997 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:55:42.202531 1783997 api_server.go:72] duration metric: took 13.747259624s to wait for apiserver process to appear ...
	I1123 10:55:42.202563 1783997 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:55:42.202588 1783997 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:55:42.218720 1783997 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:55:42.220782 1783997 api_server.go:141] control plane version: v1.28.0
	I1123 10:55:42.220908 1783997 api_server.go:131] duration metric: took 18.335434ms to wait for apiserver health ...
	I1123 10:55:42.220947 1783997 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:55:42.227364 1783997 system_pods.go:59] 8 kube-system pods found
	I1123 10:55:42.227483 1783997 system_pods.go:61] "coredns-5dd5756b68-cxm6d" [5bb94c83-477d-49aa-9ade-b2404e214905] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:55:42.227507 1783997 system_pods.go:61] "etcd-old-k8s-version-162750" [aced8707-2bd1-4ee1-98fe-311294917440] Running
	I1123 10:55:42.227551 1783997 system_pods.go:61] "kindnet-zb6c5" [2801a8b0-1e0a-4426-be8c-07fd89dff52f] Running
	I1123 10:55:42.227577 1783997 system_pods.go:61] "kube-apiserver-old-k8s-version-162750" [1350c454-aabc-4ecc-b8c0-230c87e88fb5] Running
	I1123 10:55:42.227599 1783997 system_pods.go:61] "kube-controller-manager-old-k8s-version-162750" [67fd41b0-4c9f-4e0a-93f8-d3d298a13ce6] Running
	I1123 10:55:42.227636 1783997 system_pods.go:61] "kube-proxy-79b2j" [e6211a98-e130-4ef0-b3b4-25ab09219fd4] Running
	I1123 10:55:42.227661 1783997 system_pods.go:61] "kube-scheduler-old-k8s-version-162750" [0559d534-1bf5-49e8-871e-48080b9375ee] Running
	I1123 10:55:42.227685 1783997 system_pods.go:61] "storage-provisioner" [3c9ddafc-e744-4085-ab89-dace2cd10a03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:55:42.227724 1783997 system_pods.go:74] duration metric: took 6.734708ms to wait for pod list to return data ...
	I1123 10:55:42.227754 1783997 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:55:42.233192 1783997 default_sa.go:45] found service account: "default"
	I1123 10:55:42.233282 1783997 default_sa.go:55] duration metric: took 5.507083ms for default service account to be created ...
	I1123 10:55:42.233310 1783997 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:55:42.241449 1783997 system_pods.go:86] 8 kube-system pods found
	I1123 10:55:42.241556 1783997 system_pods.go:89] "coredns-5dd5756b68-cxm6d" [5bb94c83-477d-49aa-9ade-b2404e214905] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:55:42.241581 1783997 system_pods.go:89] "etcd-old-k8s-version-162750" [aced8707-2bd1-4ee1-98fe-311294917440] Running
	I1123 10:55:42.241619 1783997 system_pods.go:89] "kindnet-zb6c5" [2801a8b0-1e0a-4426-be8c-07fd89dff52f] Running
	I1123 10:55:42.241647 1783997 system_pods.go:89] "kube-apiserver-old-k8s-version-162750" [1350c454-aabc-4ecc-b8c0-230c87e88fb5] Running
	I1123 10:55:42.241670 1783997 system_pods.go:89] "kube-controller-manager-old-k8s-version-162750" [67fd41b0-4c9f-4e0a-93f8-d3d298a13ce6] Running
	I1123 10:55:42.241704 1783997 system_pods.go:89] "kube-proxy-79b2j" [e6211a98-e130-4ef0-b3b4-25ab09219fd4] Running
	I1123 10:55:42.241730 1783997 system_pods.go:89] "kube-scheduler-old-k8s-version-162750" [0559d534-1bf5-49e8-871e-48080b9375ee] Running
	I1123 10:55:42.241754 1783997 system_pods.go:89] "storage-provisioner" [3c9ddafc-e744-4085-ab89-dace2cd10a03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:55:42.241813 1783997 retry.go:31] will retry after 211.209028ms: missing components: kube-dns
	I1123 10:55:42.467715 1783997 system_pods.go:86] 8 kube-system pods found
	I1123 10:55:42.467795 1783997 system_pods.go:89] "coredns-5dd5756b68-cxm6d" [5bb94c83-477d-49aa-9ade-b2404e214905] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:55:42.467816 1783997 system_pods.go:89] "etcd-old-k8s-version-162750" [aced8707-2bd1-4ee1-98fe-311294917440] Running
	I1123 10:55:42.467872 1783997 system_pods.go:89] "kindnet-zb6c5" [2801a8b0-1e0a-4426-be8c-07fd89dff52f] Running
	I1123 10:55:42.467895 1783997 system_pods.go:89] "kube-apiserver-old-k8s-version-162750" [1350c454-aabc-4ecc-b8c0-230c87e88fb5] Running
	I1123 10:55:42.467914 1783997 system_pods.go:89] "kube-controller-manager-old-k8s-version-162750" [67fd41b0-4c9f-4e0a-93f8-d3d298a13ce6] Running
	I1123 10:55:42.467934 1783997 system_pods.go:89] "kube-proxy-79b2j" [e6211a98-e130-4ef0-b3b4-25ab09219fd4] Running
	I1123 10:55:42.467975 1783997 system_pods.go:89] "kube-scheduler-old-k8s-version-162750" [0559d534-1bf5-49e8-871e-48080b9375ee] Running
	I1123 10:55:42.468004 1783997 system_pods.go:89] "storage-provisioner" [3c9ddafc-e744-4085-ab89-dace2cd10a03] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:55:42.468048 1783997 retry.go:31] will retry after 353.778957ms: missing components: kube-dns
	I1123 10:55:42.826445 1783997 system_pods.go:86] 8 kube-system pods found
	I1123 10:55:42.826475 1783997 system_pods.go:89] "coredns-5dd5756b68-cxm6d" [5bb94c83-477d-49aa-9ade-b2404e214905] Running
	I1123 10:55:42.826482 1783997 system_pods.go:89] "etcd-old-k8s-version-162750" [aced8707-2bd1-4ee1-98fe-311294917440] Running
	I1123 10:55:42.826486 1783997 system_pods.go:89] "kindnet-zb6c5" [2801a8b0-1e0a-4426-be8c-07fd89dff52f] Running
	I1123 10:55:42.826518 1783997 system_pods.go:89] "kube-apiserver-old-k8s-version-162750" [1350c454-aabc-4ecc-b8c0-230c87e88fb5] Running
	I1123 10:55:42.826531 1783997 system_pods.go:89] "kube-controller-manager-old-k8s-version-162750" [67fd41b0-4c9f-4e0a-93f8-d3d298a13ce6] Running
	I1123 10:55:42.826535 1783997 system_pods.go:89] "kube-proxy-79b2j" [e6211a98-e130-4ef0-b3b4-25ab09219fd4] Running
	I1123 10:55:42.826539 1783997 system_pods.go:89] "kube-scheduler-old-k8s-version-162750" [0559d534-1bf5-49e8-871e-48080b9375ee] Running
	I1123 10:55:42.826571 1783997 system_pods.go:89] "storage-provisioner" [3c9ddafc-e744-4085-ab89-dace2cd10a03] Running
	I1123 10:55:42.826608 1783997 system_pods.go:126] duration metric: took 593.269633ms to wait for k8s-apps to be running ...
	I1123 10:55:42.826622 1783997 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:55:42.826696 1783997 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:55:42.839784 1783997 system_svc.go:56] duration metric: took 13.15374ms WaitForService to wait for kubelet
	I1123 10:55:42.839811 1783997 kubeadm.go:587] duration metric: took 14.384548279s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:55:42.839830 1783997 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:55:42.842751 1783997 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:55:42.842784 1783997 node_conditions.go:123] node cpu capacity is 2
	I1123 10:55:42.842798 1783997 node_conditions.go:105] duration metric: took 2.942496ms to run NodePressure ...
	I1123 10:55:42.842827 1783997 start.go:242] waiting for startup goroutines ...
	I1123 10:55:42.842841 1783997 start.go:247] waiting for cluster config update ...
	I1123 10:55:42.842864 1783997 start.go:256] writing updated cluster config ...
	I1123 10:55:42.843149 1783997 ssh_runner.go:195] Run: rm -f paused
	I1123 10:55:42.846757 1783997 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:55:42.851146 1783997 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-cxm6d" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.856441 1783997 pod_ready.go:94] pod "coredns-5dd5756b68-cxm6d" is "Ready"
	I1123 10:55:42.856475 1783997 pod_ready.go:86] duration metric: took 5.30507ms for pod "coredns-5dd5756b68-cxm6d" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.859545 1783997 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.864209 1783997 pod_ready.go:94] pod "etcd-old-k8s-version-162750" is "Ready"
	I1123 10:55:42.864234 1783997 pod_ready.go:86] duration metric: took 4.663875ms for pod "etcd-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.867129 1783997 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.872330 1783997 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-162750" is "Ready"
	I1123 10:55:42.872353 1783997 pod_ready.go:86] duration metric: took 5.198997ms for pod "kube-apiserver-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:42.875324 1783997 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:43.251085 1783997 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-162750" is "Ready"
	I1123 10:55:43.251127 1783997 pod_ready.go:86] duration metric: took 375.783039ms for pod "kube-controller-manager-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:43.452047 1783997 pod_ready.go:83] waiting for pod "kube-proxy-79b2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:43.851383 1783997 pod_ready.go:94] pod "kube-proxy-79b2j" is "Ready"
	I1123 10:55:43.851463 1783997 pod_ready.go:86] duration metric: took 399.390198ms for pod "kube-proxy-79b2j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:44.051690 1783997 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:44.450999 1783997 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-162750" is "Ready"
	I1123 10:55:44.451024 1783997 pod_ready.go:86] duration metric: took 399.305853ms for pod "kube-scheduler-old-k8s-version-162750" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:55:44.451038 1783997 pod_ready.go:40] duration metric: took 1.604247069s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:55:44.520151 1783997 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 10:55:44.523300 1783997 out.go:203] 
	W1123 10:55:44.526225 1783997 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 10:55:44.529256 1783997 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 10:55:44.532207 1783997 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-162750" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	ba06948ce0b13       1611cd07b61d5       9 seconds ago       Running             busybox                   0                   7b6f4396c2ac4       busybox                                          default
	5a70cbe1d834b       ba04bb24b9575       14 seconds ago      Running             storage-provisioner       0                   a70aca4724ab5       storage-provisioner                              kube-system
	bc5fe6488aa4a       97e04611ad434       14 seconds ago      Running             coredns                   0                   4adbe96b4b541       coredns-5dd5756b68-cxm6d                         kube-system
	9b339c5b61e20       b1a8c6f707935       25 seconds ago      Running             kindnet-cni               0                   286a1b0362bc9       kindnet-zb6c5                                    kube-system
	5edae851933e1       940f54a5bcae9       27 seconds ago      Running             kube-proxy                0                   f64a9d6491bd6       kube-proxy-79b2j                                 kube-system
	86e0ec376568c       9cdd6470f48c8       49 seconds ago      Running             etcd                      0                   6326fefcfce67       etcd-old-k8s-version-162750                      kube-system
	2c46626cae965       00543d2fe5d71       49 seconds ago      Running             kube-apiserver            0                   86c41d194d43d       kube-apiserver-old-k8s-version-162750            kube-system
	b172ecbbc92ae       46cc66ccc7c19       49 seconds ago      Running             kube-controller-manager   0                   2e69a304f2302       kube-controller-manager-old-k8s-version-162750   kube-system
	8fe87db64994a       762dce4090c5f       49 seconds ago      Running             kube-scheduler            0                   47d58bf656f5a       kube-scheduler-old-k8s-version-162750            kube-system
	
	
	==> containerd <==
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.305269997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:3c9ddafc-e744-4085-ab89-dace2cd10a03,Namespace:kube-system,Attempt:0,} returns sandbox id \"a70aca4724ab5c57d7e45ea572a79a2152282f9a26c6c2ae65163ab53bc60287\""
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.322518964Z" level=info msg="CreateContainer within sandbox \"a70aca4724ab5c57d7e45ea572a79a2152282f9a26c6c2ae65163ab53bc60287\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.343620324Z" level=info msg="Container 5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.355157815Z" level=info msg="CreateContainer within sandbox \"a70aca4724ab5c57d7e45ea572a79a2152282f9a26c6c2ae65163ab53bc60287\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77\""
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.358862686Z" level=info msg="StartContainer for \"5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77\""
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.362313229Z" level=info msg="connecting to shim 5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77" address="unix:///run/containerd/s/45d05dd539374b013fce28de2d923c29a1eb650ec6ca7883ae2b1c38a9c10251" protocol=ttrpc version=3
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.427452007Z" level=info msg="StartContainer for \"bc5fe6488aa4a4f759401ceb92d86b05c69c3c6e8091bc15dcae014de37ad281\" returns successfully"
	Nov 23 10:55:42 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:42.452264178Z" level=info msg="StartContainer for \"5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77\" returns successfully"
	Nov 23 10:55:45 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:45.057875535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2dd7549c-5bf6-4864-9a27-188c6854aedd,Namespace:default,Attempt:0,}"
	Nov 23 10:55:45 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:45.136262770Z" level=info msg="connecting to shim 7b6f4396c2ac4e649170e7f117eabde64ed25ed0f5d6e9fb9a05c0d5cfb58f48" address="unix:///run/containerd/s/e1101a7b12b9df4e8726eedc27453ba7cd17794c35a04be3e7f281e3b70ad2a6" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 10:55:45 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:45.247281112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2dd7549c-5bf6-4864-9a27-188c6854aedd,Namespace:default,Attempt:0,} returns sandbox id \"7b6f4396c2ac4e649170e7f117eabde64ed25ed0f5d6e9fb9a05c0d5cfb58f48\""
	Nov 23 10:55:45 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:45.251369979Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.455900600Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.460607190Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.462553398Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.466379668Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.467037569Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.215402302s"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.467248320Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.469006842Z" level=info msg="CreateContainer within sandbox \"7b6f4396c2ac4e649170e7f117eabde64ed25ed0f5d6e9fb9a05c0d5cfb58f48\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.482365147Z" level=info msg="Container ba06948ce0b1317deec10511f50ab4018f7d18a9085725df5cdd748d106034d0: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.492750868Z" level=info msg="CreateContainer within sandbox \"7b6f4396c2ac4e649170e7f117eabde64ed25ed0f5d6e9fb9a05c0d5cfb58f48\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ba06948ce0b1317deec10511f50ab4018f7d18a9085725df5cdd748d106034d0\""
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.493687604Z" level=info msg="StartContainer for \"ba06948ce0b1317deec10511f50ab4018f7d18a9085725df5cdd748d106034d0\""
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.494806735Z" level=info msg="connecting to shim ba06948ce0b1317deec10511f50ab4018f7d18a9085725df5cdd748d106034d0" address="unix:///run/containerd/s/e1101a7b12b9df4e8726eedc27453ba7cd17794c35a04be3e7f281e3b70ad2a6" protocol=ttrpc version=3
	Nov 23 10:55:47 old-k8s-version-162750 containerd[760]: time="2025-11-23T10:55:47.568067430Z" level=info msg="StartContainer for \"ba06948ce0b1317deec10511f50ab4018f7d18a9085725df5cdd748d106034d0\" returns successfully"
	Nov 23 10:55:53 old-k8s-version-162750 containerd[760]: E1123 10:55:53.878172     760 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [bc5fe6488aa4a4f759401ceb92d86b05c69c3c6e8091bc15dcae014de37ad281] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:48632 - 24416 "HINFO IN 3024842233634058345.7861179067670114405. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.057365856s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-162750
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-162750
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=old-k8s-version-162750
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_55_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:55:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-162750
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:55:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:55:47 +0000   Sun, 23 Nov 2025 10:55:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:55:47 +0000   Sun, 23 Nov 2025 10:55:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:55:47 +0000   Sun, 23 Nov 2025 10:55:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:55:47 +0000   Sun, 23 Nov 2025 10:55:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-162750
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                bfae4691-d726-46e0-afa3-b816e5402bb4
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-cxm6d                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     29s
	  kube-system                 etcd-old-k8s-version-162750                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-zb6c5                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      29s
	  kube-system                 kube-apiserver-old-k8s-version-162750             250m (12%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-162750    200m (10%)    0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-79b2j                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-scheduler-old-k8s-version-162750             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 41s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s   kubelet          Node old-k8s-version-162750 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s   kubelet          Node old-k8s-version-162750 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s   kubelet          Node old-k8s-version-162750 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           30s   node-controller  Node old-k8s-version-162750 event: Registered Node old-k8s-version-162750 in Controller
	  Normal  NodeReady                16s   kubelet          Node old-k8s-version-162750 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:50] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [86e0ec376568c81aa8ee0cf9c45f122bfb574eb7aaf9980bcd17e7f6a947b65d] <==
	{"level":"info","ts":"2025-11-23T10:55:08.414356Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-23T10:55:08.414618Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-23T10:55:08.431851Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T10:55:08.431918Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T10:55:08.432071Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T10:55:08.435999Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T10:55:08.436237Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T10:55:08.459276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T10:55:08.459497Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T10:55:08.459611Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-23T10:55:08.459739Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T10:55:08.45982Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T10:55:08.459907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-23T10:55:08.459994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T10:55:08.461288Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:55:08.471141Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-162750 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T10:55:08.471348Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:55:08.472539Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T10:55:08.472912Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T10:55:08.473132Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T10:55:08.473239Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T10:55:08.474392Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-23T10:55:08.475232Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:55:08.475536Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T10:55:08.487789Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 10:55:57 up 11:38,  0 user,  load average: 3.20, 3.56, 2.94
	Linux old-k8s-version-162750 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9b339c5b61e202c45c3fdc95ba0af753644dca4aef308dc5f15111615f78f8af] <==
	I1123 10:55:31.429904       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:55:31.431758       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:55:31.431908       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:55:31.431920       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:55:31.432460       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:55:31Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:55:31.632322       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:55:31.632353       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:55:31.632364       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:55:31.633553       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:55:31.832439       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:55:31.832469       1 metrics.go:72] Registering metrics
	I1123 10:55:31.832663       1 controller.go:711] "Syncing nftables rules"
	I1123 10:55:41.640548       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:55:41.640815       1 main.go:301] handling current node
	I1123 10:55:51.632089       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:55:51.632119       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2c46626cae965cc9b9ae2b696041e30b151618e4aee79c75568edaa726197a16] <==
	I1123 10:55:12.878120       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1123 10:55:12.878218       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 10:55:12.878522       1 aggregator.go:166] initial CRD sync complete...
	I1123 10:55:12.878662       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 10:55:12.878756       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 10:55:12.878858       1 cache.go:39] Caches are synced for autoregister controller
	I1123 10:55:12.879423       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 10:55:12.879632       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 10:55:12.882027       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 10:55:13.080943       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:55:13.487590       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:55:13.492827       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:55:13.492850       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:55:14.147673       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:55:14.199257       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:55:14.319375       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:55:14.330386       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 10:55:14.331544       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 10:55:14.338132       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:55:14.657050       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 10:55:16.129805       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 10:55:16.154132       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:55:16.165483       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 10:55:27.550634       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 10:55:28.249408       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [b172ecbbc92ae00ae0576b5c91f5300dd1600c68cedc8d7906fefff317c2b2ad] <==
	I1123 10:55:27.569285       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 10:55:27.575276       1 shared_informer.go:318] Caches are synced for attach detach
	I1123 10:55:27.588832       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1123 10:55:27.638851       1 shared_informer.go:318] Caches are synced for ClusterRoleAggregator
	I1123 10:55:28.054052       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:55:28.084968       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 10:55:28.085001       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 10:55:28.263305       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-zb6c5"
	I1123 10:55:28.266556       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-79b2j"
	I1123 10:55:28.407913       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-bxh4c"
	I1123 10:55:28.428739       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-cxm6d"
	I1123 10:55:28.448302       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="892.411145ms"
	I1123 10:55:28.513752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.199822ms"
	I1123 10:55:28.513850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.506µs"
	I1123 10:55:29.726843       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 10:55:29.762403       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-bxh4c"
	I1123 10:55:29.773596       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="47.578157ms"
	I1123 10:55:29.792150       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.503716ms"
	I1123 10:55:29.793161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="41.295µs"
	I1123 10:55:41.726998       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="118.553µs"
	I1123 10:55:41.744627       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.644µs"
	I1123 10:55:42.457536       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1123 10:55:42.603288       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="133.715µs"
	I1123 10:55:42.660947       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.283125ms"
	I1123 10:55:42.661290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="63.252µs"
	
	
	==> kube-proxy [5edae851933e164429106b42e9db4cd11398e2cef913bcace0a893ce92ae2a64] <==
	I1123 10:55:29.389141       1 server_others.go:69] "Using iptables proxy"
	I1123 10:55:29.405946       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1123 10:55:29.507964       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:55:29.509971       1 server_others.go:152] "Using iptables Proxier"
	I1123 10:55:29.510014       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 10:55:29.510087       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 10:55:29.510129       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 10:55:29.510338       1 server.go:846] "Version info" version="v1.28.0"
	I1123 10:55:29.510355       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:55:29.512734       1 config.go:188] "Starting service config controller"
	I1123 10:55:29.512760       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 10:55:29.512781       1 config.go:97] "Starting endpoint slice config controller"
	I1123 10:55:29.512786       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 10:55:29.519667       1 config.go:315] "Starting node config controller"
	I1123 10:55:29.519694       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 10:55:29.613894       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 10:55:29.613956       1 shared_informer.go:318] Caches are synced for service config
	I1123 10:55:29.620220       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [8fe87db64994a9a27004272a4bd3ef17202de2d385ebec44e1f454df96ff18cc] <==
	W1123 10:55:13.046294       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 10:55:13.046318       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:55:13.046380       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 10:55:13.046398       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 10:55:13.046455       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 10:55:13.046471       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 10:55:13.046520       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 10:55:13.046535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1123 10:55:13.046644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 10:55:13.046660       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 10:55:13.046716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 10:55:13.046735       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 10:55:13.046810       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 10:55:13.046825       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 10:55:13.046875       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 10:55:13.046889       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 10:55:13.046944       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1123 10:55:13.046959       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 10:55:13.047021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1123 10:55:13.047037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1123 10:55:13.865873       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1123 10:55:13.866133       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1123 10:55:13.914014       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 10:55:13.914052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I1123 10:55:14.635417       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 10:55:27 old-k8s-version-162750 kubelet[1545]: I1123 10:55:27.529028    1545 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 10:55:27 old-k8s-version-162750 kubelet[1545]: I1123 10:55:27.529618    1545 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.271914    1545 topology_manager.go:215] "Topology Admit Handler" podUID="2801a8b0-1e0a-4426-be8c-07fd89dff52f" podNamespace="kube-system" podName="kindnet-zb6c5"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.302314    1545 topology_manager.go:215] "Topology Admit Handler" podUID="e6211a98-e130-4ef0-b3b4-25ab09219fd4" podNamespace="kube-system" podName="kube-proxy-79b2j"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399504    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6211a98-e130-4ef0-b3b4-25ab09219fd4-xtables-lock\") pod \"kube-proxy-79b2j\" (UID: \"e6211a98-e130-4ef0-b3b4-25ab09219fd4\") " pod="kube-system/kube-proxy-79b2j"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399566    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6211a98-e130-4ef0-b3b4-25ab09219fd4-lib-modules\") pod \"kube-proxy-79b2j\" (UID: \"e6211a98-e130-4ef0-b3b4-25ab09219fd4\") " pod="kube-system/kube-proxy-79b2j"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399601    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e6211a98-e130-4ef0-b3b4-25ab09219fd4-kube-proxy\") pod \"kube-proxy-79b2j\" (UID: \"e6211a98-e130-4ef0-b3b4-25ab09219fd4\") " pod="kube-system/kube-proxy-79b2j"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399647    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2801a8b0-1e0a-4426-be8c-07fd89dff52f-cni-cfg\") pod \"kindnet-zb6c5\" (UID: \"2801a8b0-1e0a-4426-be8c-07fd89dff52f\") " pod="kube-system/kindnet-zb6c5"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399674    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2801a8b0-1e0a-4426-be8c-07fd89dff52f-xtables-lock\") pod \"kindnet-zb6c5\" (UID: \"2801a8b0-1e0a-4426-be8c-07fd89dff52f\") " pod="kube-system/kindnet-zb6c5"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399699    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2801a8b0-1e0a-4426-be8c-07fd89dff52f-lib-modules\") pod \"kindnet-zb6c5\" (UID: \"2801a8b0-1e0a-4426-be8c-07fd89dff52f\") " pod="kube-system/kindnet-zb6c5"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399739    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxpns\" (UniqueName: \"kubernetes.io/projected/2801a8b0-1e0a-4426-be8c-07fd89dff52f-kube-api-access-sxpns\") pod \"kindnet-zb6c5\" (UID: \"2801a8b0-1e0a-4426-be8c-07fd89dff52f\") " pod="kube-system/kindnet-zb6c5"
	Nov 23 10:55:28 old-k8s-version-162750 kubelet[1545]: I1123 10:55:28.399767    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k45dk\" (UniqueName: \"kubernetes.io/projected/e6211a98-e130-4ef0-b3b4-25ab09219fd4-kube-api-access-k45dk\") pod \"kube-proxy-79b2j\" (UID: \"e6211a98-e130-4ef0-b3b4-25ab09219fd4\") " pod="kube-system/kube-proxy-79b2j"
	Nov 23 10:55:31 old-k8s-version-162750 kubelet[1545]: I1123 10:55:31.579745    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-79b2j" podStartSLOduration=3.579701386 podCreationTimestamp="2025-11-23 10:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:55:29.57435474 +0000 UTC m=+13.495910640" watchObservedRunningTime="2025-11-23 10:55:31.579701386 +0000 UTC m=+15.501257278"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.689932    1545 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.722056    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-zb6c5" podStartSLOduration=11.778808213 podCreationTimestamp="2025-11-23 10:55:28 +0000 UTC" firstStartedPulling="2025-11-23 10:55:29.166586204 +0000 UTC m=+13.088142096" lastFinishedPulling="2025-11-23 10:55:31.109797113 +0000 UTC m=+15.031353004" observedRunningTime="2025-11-23 10:55:31.581628027 +0000 UTC m=+15.503183918" watchObservedRunningTime="2025-11-23 10:55:41.722019121 +0000 UTC m=+25.643575013"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.722408    1545 topology_manager.go:215] "Topology Admit Handler" podUID="5bb94c83-477d-49aa-9ade-b2404e214905" podNamespace="kube-system" podName="coredns-5dd5756b68-cxm6d"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.728765    1545 topology_manager.go:215] "Topology Admit Handler" podUID="3c9ddafc-e744-4085-ab89-dace2cd10a03" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.806388    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bb94c83-477d-49aa-9ade-b2404e214905-config-volume\") pod \"coredns-5dd5756b68-cxm6d\" (UID: \"5bb94c83-477d-49aa-9ade-b2404e214905\") " pod="kube-system/coredns-5dd5756b68-cxm6d"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.806474    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69f29\" (UniqueName: \"kubernetes.io/projected/5bb94c83-477d-49aa-9ade-b2404e214905-kube-api-access-69f29\") pod \"coredns-5dd5756b68-cxm6d\" (UID: \"5bb94c83-477d-49aa-9ade-b2404e214905\") " pod="kube-system/coredns-5dd5756b68-cxm6d"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.806515    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/3c9ddafc-e744-4085-ab89-dace2cd10a03-tmp\") pod \"storage-provisioner\" (UID: \"3c9ddafc-e744-4085-ab89-dace2cd10a03\") " pod="kube-system/storage-provisioner"
	Nov 23 10:55:41 old-k8s-version-162750 kubelet[1545]: I1123 10:55:41.806548    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkjr8\" (UniqueName: \"kubernetes.io/projected/3c9ddafc-e744-4085-ab89-dace2cd10a03-kube-api-access-dkjr8\") pod \"storage-provisioner\" (UID: \"3c9ddafc-e744-4085-ab89-dace2cd10a03\") " pod="kube-system/storage-provisioner"
	Nov 23 10:55:42 old-k8s-version-162750 kubelet[1545]: I1123 10:55:42.625897    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-cxm6d" podStartSLOduration=14.625853822 podCreationTimestamp="2025-11-23 10:55:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:55:42.604047235 +0000 UTC m=+26.525603126" watchObservedRunningTime="2025-11-23 10:55:42.625853822 +0000 UTC m=+26.547409722"
	Nov 23 10:55:44 old-k8s-version-162750 kubelet[1545]: I1123 10:55:44.753352    1545 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.753311461 podCreationTimestamp="2025-11-23 10:55:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:55:42.66406931 +0000 UTC m=+26.585625210" watchObservedRunningTime="2025-11-23 10:55:44.753311461 +0000 UTC m=+28.674867352"
	Nov 23 10:55:44 old-k8s-version-162750 kubelet[1545]: I1123 10:55:44.753537    1545 topology_manager.go:215] "Topology Admit Handler" podUID="2dd7549c-5bf6-4864-9a27-188c6854aedd" podNamespace="default" podName="busybox"
	Nov 23 10:55:44 old-k8s-version-162750 kubelet[1545]: I1123 10:55:44.847562    1545 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b5t8\" (UniqueName: \"kubernetes.io/projected/2dd7549c-5bf6-4864-9a27-188c6854aedd-kube-api-access-9b5t8\") pod \"busybox\" (UID: \"2dd7549c-5bf6-4864-9a27-188c6854aedd\") " pod="default/busybox"
	
	
	==> storage-provisioner [5a70cbe1d834be2f436a15054465484da38394906a319ebb6c0e0dbc33118d77] <==
	I1123 10:55:42.457667       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:55:42.481260       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:55:42.481489       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 10:55:42.489314       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:55:42.489547       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-162750_50e25990-0844-41df-ae82-fbe719da9f7e!
	I1123 10:55:42.490536       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b658e7d2-aa8d-4b62-a5b7-ea9d07cb7dad", APIVersion:"v1", ResourceVersion:"399", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-162750_50e25990-0844-41df-ae82-fbe719da9f7e became leader
	I1123 10:55:42.590544       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-162750_50e25990-0844-41df-ae82-fbe719da9f7e!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162750 -n old-k8s-version-162750
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-162750 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.57s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (13.8s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-055571 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bd8008cd-cc28-45d9-8fa2-06099a099993] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bd8008cd-cc28-45d9-8fa2-06099a099993] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003983557s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-055571 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
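For context, the failing assertion above boils down to executing `ulimit -n` inside the deployed busybox pod and comparing the result with the expected file-descriptor soft limit. Below is a minimal, illustrative Go sketch of that check (this is not the actual start_stop_delete_test.go implementation; it assumes kubectl is on PATH and reuses the context name and expected value shown in this log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// checkUlimit is a hypothetical helper: it execs into the "busybox" pod and
// compares the shell's reported open-file soft limit with the expected value,
// mirroring the kubectl command recorded in the test output above.
func checkUlimit(kubeContext, expected string) error {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl exec failed: %v: %s", err, out)
	}
	if got := strings.TrimSpace(string(out)); got != expected {
		return fmt.Errorf("'ulimit -n' returned %s, expected %s", got, expected)
	}
	return nil
}

func main() {
	// The test expects 1048576; the pod in this run reported 1024.
	if err := checkUlimit("no-preload-055571", "1048576"); err != nil {
		fmt.Println(err)
	}
}

Run against the cluster in this report, such a check would surface the same mismatch the test records (1024 instead of 1048576).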
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-055571
helpers_test.go:243: (dbg) docker inspect no-preload-055571:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f",
	        "Created": "2025-11-23T10:57:25.748729169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1792875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:57:25.809718116Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f/hostname",
	        "HostsPath": "/var/lib/docker/containers/59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f/hosts",
	        "LogPath": "/var/lib/docker/containers/59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f/59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f-json.log",
	        "Name": "/no-preload-055571",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-055571:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-055571",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f",
	                "LowerDir": "/var/lib/docker/overlay2/5b539834b2428236c9253cad6fc0efaa94fadf221210ee96b438109a6933e4ba-init/diff:/var/lib/docker/overlay2/fe0bef51c968206096993e9a75db2143cd9cd74d56696a257291ce63f851a2d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b539834b2428236c9253cad6fc0efaa94fadf221210ee96b438109a6933e4ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b539834b2428236c9253cad6fc0efaa94fadf221210ee96b438109a6933e4ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b539834b2428236c9253cad6fc0efaa94fadf221210ee96b438109a6933e4ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-055571",
	                "Source": "/var/lib/docker/volumes/no-preload-055571/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-055571",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-055571",
	                "name.minikube.sigs.k8s.io": "no-preload-055571",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43ba3fa5ee2e08815e78dcdf9fb17cb6a09a78e27da95aae4294ce284c4a82f2",
	            "SandboxKey": "/var/run/docker/netns/43ba3fa5ee2e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35264"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35265"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35268"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35266"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35267"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-055571": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:da:53:a4:00:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "91ca09998ccaffe4fc03ffa1431b438347753da0802ed94bcca33ae2c6c74c52",
	                    "EndpointID": "3eb3b6ee7e3cd0676328889505428983d4b757e83bb47a7cb82059cd6e68bfa5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-055571",
	                        "59920e023310"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-055571 -n no-preload-055571
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-055571 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-055571 logs -n 25: (1.205382033s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ start   │ -p force-systemd-env-479166 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:53 UTC │
	│ delete  │ -p kubernetes-upgrade-871841                                                                                                                                                                                                                        │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:53 UTC │
	│ start   │ -p cert-expiration-679101 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-679101    │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ force-systemd-env-479166 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p force-systemd-env-479166                                                                                                                                                                                                                         │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p cert-options-501705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ cert-options-501705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ -p cert-options-501705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p cert-options-501705                                                                                                                                                                                                                              │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:55 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:55 UTC │ 23 Nov 25 10:55 UTC │
	│ stop    │ -p old-k8s-version-162750 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:55 UTC │ 23 Nov 25 10:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-162750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:56 UTC │ 23 Nov 25 10:56 UTC │
	│ start   │ -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:56 UTC │ 23 Nov 25 10:57 UTC │
	│ image   │ old-k8s-version-162750 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ pause   │ -p old-k8s-version-162750 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ unpause │ -p old-k8s-version-162750 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p old-k8s-version-162750                                                                                                                                                                                                                           │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p old-k8s-version-162750                                                                                                                                                                                                                           │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ start   │ -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-055571         │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:58 UTC │
	│ start   │ -p cert-expiration-679101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-679101    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p cert-expiration-679101                                                                                                                                                                                                                           │ cert-expiration-679101    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ start   │ -p embed-certs-969029 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-969029        │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:57:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:57:42.839913 1795697 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:57:42.840123 1795697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:57:42.840150 1795697 out.go:374] Setting ErrFile to fd 2...
	I1123 10:57:42.840168 1795697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:57:42.840448 1795697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:57:42.840879 1795697 out.go:368] Setting JSON to false
	I1123 10:57:42.841855 1795697 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":42008,"bootTime":1763853455,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:57:42.841951 1795697 start.go:143] virtualization:  
	I1123 10:57:42.846872 1795697 out.go:179] * [embed-certs-969029] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:57:42.851762 1795697 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:57:42.851911 1795697 notify.go:221] Checking for updates...
	I1123 10:57:42.859066 1795697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:57:42.862605 1795697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:57:42.865870 1795697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:57:42.869272 1795697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:57:42.872575 1795697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:57:42.876502 1795697 config.go:182] Loaded profile config "no-preload-055571": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:57:42.876603 1795697 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:57:42.912096 1795697 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:57:42.912222 1795697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:57:42.993718 1795697 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 10:57:42.982225706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:57:42.993823 1795697 docker.go:319] overlay module found
	I1123 10:57:42.997399 1795697 out.go:179] * Using the docker driver based on user configuration
	I1123 10:57:43.000327 1795697 start.go:309] selected driver: docker
	I1123 10:57:43.000352 1795697 start.go:927] validating driver "docker" against <nil>
	I1123 10:57:43.000366 1795697 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:57:43.001183 1795697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:57:43.102998 1795697 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 10:57:43.087640463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:57:43.103144 1795697 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:57:43.103389 1795697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:57:43.106395 1795697 out.go:179] * Using Docker driver with root privileges
	I1123 10:57:43.109329 1795697 cni.go:84] Creating CNI manager for ""
	I1123 10:57:43.109410 1795697 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:57:43.109419 1795697 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:57:43.109509 1795697 start.go:353] cluster config:
	{Name:embed-certs-969029 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-969029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:57:43.112637 1795697 out.go:179] * Starting "embed-certs-969029" primary control-plane node in "embed-certs-969029" cluster
	I1123 10:57:43.115853 1795697 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 10:57:43.118785 1795697 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:57:43.121668 1795697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 10:57:43.121714 1795697 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 10:57:43.121724 1795697 cache.go:65] Caching tarball of preloaded images
	I1123 10:57:43.121810 1795697 preload.go:238] Found /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 10:57:43.121820 1795697 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 10:57:43.121933 1795697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/config.json ...
	I1123 10:57:43.121951 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/config.json: {Name:mkf41a7bab235d324f39d66779e47beeeede1b81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:43.122094 1795697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:57:43.145884 1795697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:57:43.145909 1795697 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:57:43.145923 1795697 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:57:43.145958 1795697 start.go:360] acquireMachinesLock for embed-certs-969029: {Name:mk4f9a35c261c685efd8080b5b8d7f71b5a367c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:57:43.146148 1795697 start.go:364] duration metric: took 97.278µs to acquireMachinesLock for "embed-certs-969029"
	I1123 10:57:43.146187 1795697 start.go:93] Provisioning new machine with config: &{Name:embed-certs-969029 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-969029 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:57:43.146264 1795697 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:57:40.100126 1792569 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.433855052s)
	I1123 10:57:40.100153 1792569 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1123 10:57:40.100174 1792569 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 10:57:40.100241 1792569 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 10:57:40.100309 1792569 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.434169465s)
	I1123 10:57:40.100330 1792569 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1123 10:57:40.100346 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1123 10:57:41.785338 1792569 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.685067667s)
	I1123 10:57:41.785370 1792569 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1123 10:57:41.785395 1792569 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 10:57:41.785444 1792569 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 10:57:43.408541 1792569 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.62284182s)
	I1123 10:57:43.408573 1792569 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1123 10:57:43.408597 1792569 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1123 10:57:43.408650 1792569 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1123 10:57:43.150675 1795697 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:57:43.150972 1795697 start.go:159] libmachine.API.Create for "embed-certs-969029" (driver="docker")
	I1123 10:57:43.151008 1795697 client.go:173] LocalClient.Create starting
	I1123 10:57:43.151094 1795697 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem
	I1123 10:57:43.151132 1795697 main.go:143] libmachine: Decoding PEM data...
	I1123 10:57:43.151153 1795697 main.go:143] libmachine: Parsing certificate...
	I1123 10:57:43.151236 1795697 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem
	I1123 10:57:43.151260 1795697 main.go:143] libmachine: Decoding PEM data...
	I1123 10:57:43.151272 1795697 main.go:143] libmachine: Parsing certificate...
	I1123 10:57:43.151664 1795697 cli_runner.go:164] Run: docker network inspect embed-certs-969029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:57:43.171968 1795697 cli_runner.go:211] docker network inspect embed-certs-969029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:57:43.172051 1795697 network_create.go:284] running [docker network inspect embed-certs-969029] to gather additional debugging logs...
	I1123 10:57:43.172071 1795697 cli_runner.go:164] Run: docker network inspect embed-certs-969029
	W1123 10:57:43.186804 1795697 cli_runner.go:211] docker network inspect embed-certs-969029 returned with exit code 1
	I1123 10:57:43.186831 1795697 network_create.go:287] error running [docker network inspect embed-certs-969029]: docker network inspect embed-certs-969029: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-969029 not found
	I1123 10:57:43.186862 1795697 network_create.go:289] output of [docker network inspect embed-certs-969029]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-969029 not found
	
	** /stderr **
	I1123 10:57:43.186955 1795697 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:57:43.214759 1795697 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e44f782e1ead IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ae:ef:b1:2b:de} reservation:<nil>}
	I1123 10:57:43.215072 1795697 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d795300f262d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:f7:c2:f9:ad:5b} reservation:<nil>}
	I1123 10:57:43.215391 1795697 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4b6f246690b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:41:9a:79:92:5d} reservation:<nil>}
	I1123 10:57:43.215782 1795697 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001997d60}
	I1123 10:57:43.215799 1795697 network_create.go:124] attempt to create docker network embed-certs-969029 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 10:57:43.215853 1795697 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-969029 embed-certs-969029
	I1123 10:57:43.292255 1795697 network_create.go:108] docker network embed-certs-969029 192.168.76.0/24 created
	I1123 10:57:43.292284 1795697 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-969029" container
	I1123 10:57:43.292355 1795697 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:57:43.319143 1795697 cli_runner.go:164] Run: docker volume create embed-certs-969029 --label name.minikube.sigs.k8s.io=embed-certs-969029 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:57:43.339868 1795697 oci.go:103] Successfully created a docker volume embed-certs-969029
	I1123 10:57:43.339965 1795697 cli_runner.go:164] Run: docker run --rm --name embed-certs-969029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-969029 --entrypoint /usr/bin/test -v embed-certs-969029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:57:44.152961 1795697 oci.go:107] Successfully prepared a docker volume embed-certs-969029
	I1123 10:57:44.153026 1795697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 10:57:44.153039 1795697 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:57:44.153117 1795697 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-969029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:57:47.573582 1792569 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (4.164903737s)
	I1123 10:57:47.573608 1792569 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 10:57:47.573627 1792569 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 10:57:47.573673 1792569 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1123 10:57:48.488395 1792569 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 10:57:48.488435 1792569 cache_images.go:125] Successfully loaded all cached images
	I1123 10:57:48.488441 1792569 cache_images.go:94] duration metric: took 15.345382978s to LoadCachedImages
	I1123 10:57:48.488457 1792569 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1123 10:57:48.488556 1792569 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-055571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:57:48.488623 1792569 ssh_runner.go:195] Run: sudo crictl info
	I1123 10:57:48.518362 1792569 cni.go:84] Creating CNI manager for ""
	I1123 10:57:48.518389 1792569 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:57:48.518404 1792569 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:57:48.518426 1792569 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-055571 NodeName:no-preload-055571 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:57:48.518546 1792569 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-055571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:57:48.518621 1792569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:57:48.528227 1792569 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 10:57:48.528295 1792569 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 10:57:48.537037 1792569 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1123 10:57:48.537136 1792569 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 10:57:48.539474 1792569 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1123 10:57:48.539929 1792569 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1123 10:57:48.542688 1792569 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 10:57:48.542718 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1123 10:57:49.335463 1792569 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 10:57:49.387528 1792569 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 10:57:49.387621 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1123 10:57:49.556141 1792569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:57:49.593421 1792569 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 10:57:50.724726 1795697 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-969029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (6.571573486s)
	I1123 10:57:50.724760 1795697 kic.go:203] duration metric: took 6.571716859s to extract preloaded images to volume ...
	W1123 10:57:50.724899 1795697 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:57:50.725017 1795697 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:57:50.810679 1795697 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-969029 --name embed-certs-969029 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-969029 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-969029 --network embed-certs-969029 --ip 192.168.76.2 --volume embed-certs-969029:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:57:51.331720 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Running}}
	I1123 10:57:51.360964 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:57:51.393781 1795697 cli_runner.go:164] Run: docker exec embed-certs-969029 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:57:51.471381 1795697 oci.go:144] the created container "embed-certs-969029" has a running status.
	I1123 10:57:51.471408 1795697 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa...
	I1123 10:57:51.797180 1795697 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:57:51.828904 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:57:51.877446 1795697 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:57:51.877464 1795697 kic_runner.go:114] Args: [docker exec --privileged embed-certs-969029 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:57:51.994943 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:57:52.067594 1795697 machine.go:94] provisionDockerMachine start ...
	I1123 10:57:52.067692 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:52.129200 1795697 main.go:143] libmachine: Using SSH client type: native
	I1123 10:57:52.129536 1795697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35269 <nil> <nil>}
	I1123 10:57:52.129545 1795697 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:57:52.130260 1795697 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:57:49.607962 1792569 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 10:57:49.608000 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
	I1123 10:57:50.545220 1792569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:57:50.554754 1792569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 10:57:50.569039 1792569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:57:50.582934 1792569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1123 10:57:50.597055 1792569 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:57:50.601474 1792569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:57:50.617831 1792569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:57:50.738862 1792569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:57:50.773571 1792569 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571 for IP: 192.168.85.2
	I1123 10:57:50.773589 1792569 certs.go:195] generating shared ca certs ...
	I1123 10:57:50.773606 1792569 certs.go:227] acquiring lock for ca certs: {Name:mk3cca888d785818ac92c3c8d4e66a37bae0b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:50.773745 1792569 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key
	I1123 10:57:50.773784 1792569 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key
	I1123 10:57:50.773791 1792569 certs.go:257] generating profile certs ...
	I1123 10:57:50.773844 1792569 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.key
	I1123 10:57:50.773854 1792569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt with IP's: []
	I1123 10:57:51.502401 1792569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt ...
	I1123 10:57:51.502431 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt: {Name:mkbee6e4ac8c95d3a8dd5df5f98c472e8c937edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:51.502601 1792569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.key ...
	I1123 10:57:51.502608 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.key: {Name:mka6432625140a8eeb602cdb110a2eae12603dec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:51.502689 1792569 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key.3d6856fb
	I1123 10:57:51.502702 1792569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt.3d6856fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 10:57:51.702999 1792569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt.3d6856fb ...
	I1123 10:57:51.708327 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt.3d6856fb: {Name:mka8fb05df05904acd54dcd24c79da07b3426e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:51.708558 1792569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key.3d6856fb ...
	I1123 10:57:51.708595 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key.3d6856fb: {Name:mk407188a5e29b7d8747d3ad610977a67fe0d62a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:51.708709 1792569 certs.go:382] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt.3d6856fb -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt
	I1123 10:57:51.708819 1792569 certs.go:386] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key.3d6856fb -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key
	I1123 10:57:51.708918 1792569 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key
	I1123 10:57:51.708970 1792569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.crt with IP's: []
	I1123 10:57:52.423510 1792569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.crt ...
	I1123 10:57:52.423586 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.crt: {Name:mk4e0c09d8874f5df249851d07529a0f2c40b6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:52.423798 1792569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key ...
	I1123 10:57:52.423849 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key: {Name:mk042917f7e0a317d4013b2378dcad1fa9f2480e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:52.424064 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem (1338 bytes)
	W1123 10:57:52.424142 1792569 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532_empty.pem, impossibly tiny 0 bytes
	I1123 10:57:52.424169 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:57:52.424225 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:57:52.424271 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:57:52.424323 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem (1675 bytes)
	I1123 10:57:52.424392 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:57:52.424992 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:57:52.445164 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:57:52.464302 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:57:52.484112 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:57:52.511567 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:57:52.532926 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:57:52.555613 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:57:52.578773 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:57:52.600363 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /usr/share/ca-certificates/15845322.pem (1708 bytes)
	I1123 10:57:52.621075 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:57:52.650939 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem --> /usr/share/ca-certificates/1584532.pem (1338 bytes)
	I1123 10:57:52.684998 1792569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:57:52.704219 1792569 ssh_runner.go:195] Run: openssl version
	I1123 10:57:52.711621 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15845322.pem && ln -fs /usr/share/ca-certificates/15845322.pem /etc/ssl/certs/15845322.pem"
	I1123 10:57:52.721631 1792569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15845322.pem
	I1123 10:57:52.728847 1792569 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:17 /usr/share/ca-certificates/15845322.pem
	I1123 10:57:52.728992 1792569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15845322.pem
	I1123 10:57:52.773442 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15845322.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:57:52.782774 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:57:52.791623 1792569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:52.796077 1792569 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:10 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:52.796157 1792569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:52.838515 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:57:52.848284 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584532.pem && ln -fs /usr/share/ca-certificates/1584532.pem /etc/ssl/certs/1584532.pem"
	I1123 10:57:52.857445 1792569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584532.pem
	I1123 10:57:52.862188 1792569 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:17 /usr/share/ca-certificates/1584532.pem
	I1123 10:57:52.862255 1792569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584532.pem
	I1123 10:57:52.907158 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584532.pem /etc/ssl/certs/51391683.0"
	I1123 10:57:52.916811 1792569 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:57:52.921477 1792569 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:57:52.921528 1792569 kubeadm.go:401] StartCluster: {Name:no-preload-055571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:57:52.921601 1792569 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 10:57:52.921663 1792569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:57:52.953463 1792569 cri.go:89] found id: ""
	I1123 10:57:52.953538 1792569 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:57:52.963616 1792569 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:57:52.974015 1792569 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:57:52.974107 1792569 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:57:52.983464 1792569 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:57:52.983491 1792569 kubeadm.go:158] found existing configuration files:
	
	I1123 10:57:52.983573 1792569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:57:52.991976 1792569 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:57:52.992083 1792569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:57:53.000385 1792569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:57:53.010690 1792569 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:57:53.010767 1792569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:57:53.019318 1792569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:57:53.028157 1792569 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:57:53.028274 1792569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:57:53.036653 1792569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:57:53.045127 1792569 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:57:53.045245 1792569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:57:53.053849 1792569 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:57:53.091263 1792569 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:57:53.091516 1792569 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:57:53.114552 1792569 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:57:53.114665 1792569 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:57:53.114729 1792569 kubeadm.go:319] OS: Linux
	I1123 10:57:53.114798 1792569 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:57:53.114871 1792569 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:57:53.114943 1792569 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:57:53.115015 1792569 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:57:53.115085 1792569 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:57:53.115155 1792569 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:57:53.115295 1792569 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:57:53.115371 1792569 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:57:53.115435 1792569 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:57:53.181157 1792569 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:57:53.181308 1792569 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:57:53.181426 1792569 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:57:53.187649 1792569 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:57:53.193213 1792569 out.go:252]   - Generating certificates and keys ...
	I1123 10:57:53.193329 1792569 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:57:53.193441 1792569 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:57:53.635677 1792569 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:57:53.940028 1792569 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:57:54.283695 1792569 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:57:54.352503 1792569 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:57:55.282665 1795697 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-969029
	
	I1123 10:57:55.282686 1795697 ubuntu.go:182] provisioning hostname "embed-certs-969029"
	I1123 10:57:55.282747 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:55.302481 1795697 main.go:143] libmachine: Using SSH client type: native
	I1123 10:57:55.302790 1795697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35269 <nil> <nil>}
	I1123 10:57:55.302800 1795697 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-969029 && echo "embed-certs-969029" | sudo tee /etc/hostname
	I1123 10:57:55.465285 1795697 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-969029
	
	I1123 10:57:55.465371 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:55.490313 1795697 main.go:143] libmachine: Using SSH client type: native
	I1123 10:57:55.490626 1795697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35269 <nil> <nil>}
	I1123 10:57:55.490649 1795697 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-969029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-969029/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-969029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:57:55.643083 1795697 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:57:55.643152 1795697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-1582671/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-1582671/.minikube}
	I1123 10:57:55.643227 1795697 ubuntu.go:190] setting up certificates
	I1123 10:57:55.643251 1795697 provision.go:84] configureAuth start
	I1123 10:57:55.643339 1795697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-969029
	I1123 10:57:55.664896 1795697 provision.go:143] copyHostCerts
	I1123 10:57:55.664968 1795697 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem, removing ...
	I1123 10:57:55.664989 1795697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem
	I1123 10:57:55.665062 1795697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem (1078 bytes)
	I1123 10:57:55.665162 1795697 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem, removing ...
	I1123 10:57:55.665167 1795697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem
	I1123 10:57:55.665193 1795697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem (1123 bytes)
	I1123 10:57:55.665249 1795697 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem, removing ...
	I1123 10:57:55.665254 1795697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem
	I1123 10:57:55.665276 1795697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem (1675 bytes)
	I1123 10:57:55.665329 1795697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem org=jenkins.embed-certs-969029 san=[127.0.0.1 192.168.76.2 embed-certs-969029 localhost minikube]
	I1123 10:57:55.742234 1795697 provision.go:177] copyRemoteCerts
	I1123 10:57:55.742322 1795697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:57:55.742377 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:55.760349 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:57:55.872000 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:57:55.890974 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 10:57:55.909794 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:57:55.928464 1795697 provision.go:87] duration metric: took 285.179862ms to configureAuth
	I1123 10:57:55.928545 1795697 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:57:55.928755 1795697 config.go:182] Loaded profile config "embed-certs-969029": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:57:55.928784 1795697 machine.go:97] duration metric: took 3.861171385s to provisionDockerMachine
	I1123 10:57:55.928804 1795697 client.go:176] duration metric: took 12.777785538s to LocalClient.Create
	I1123 10:57:55.928862 1795697 start.go:167] duration metric: took 12.777878179s to libmachine.API.Create "embed-certs-969029"
	I1123 10:57:55.928886 1795697 start.go:293] postStartSetup for "embed-certs-969029" (driver="docker")
	I1123 10:57:55.928907 1795697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:57:55.928992 1795697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:57:55.929050 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:55.950375 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:57:56.060824 1795697 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:57:56.065050 1795697 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:57:56.065082 1795697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:57:56.065094 1795697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/addons for local assets ...
	I1123 10:57:56.065153 1795697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/files for local assets ...
	I1123 10:57:56.065232 1795697 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem -> 15845322.pem in /etc/ssl/certs
	I1123 10:57:56.065346 1795697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:57:56.074152 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:57:56.094834 1795697 start.go:296] duration metric: took 165.921304ms for postStartSetup
	I1123 10:57:56.095294 1795697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-969029
	I1123 10:57:56.115348 1795697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/config.json ...
	I1123 10:57:56.115626 1795697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:57:56.115684 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:56.141015 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:57:56.244705 1795697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:57:56.250208 1795697 start.go:128] duration metric: took 13.103928476s to createHost
	I1123 10:57:56.250230 1795697 start.go:83] releasing machines lock for "embed-certs-969029", held for 13.104066737s
	I1123 10:57:56.250296 1795697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-969029
	I1123 10:57:56.282499 1795697 ssh_runner.go:195] Run: cat /version.json
	I1123 10:57:56.282557 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:56.282801 1795697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:57:56.282872 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:56.319084 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:57:56.335581 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:57:56.439326 1795697 ssh_runner.go:195] Run: systemctl --version
	I1123 10:57:56.540865 1795697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:57:56.545527 1795697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:57:56.545596 1795697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:57:56.579443 1795697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 10:57:56.579468 1795697 start.go:496] detecting cgroup driver to use...
	I1123 10:57:56.579504 1795697 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:57:56.579552 1795697 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 10:57:56.596677 1795697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 10:57:56.612491 1795697 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:57:56.612556 1795697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:57:56.630742 1795697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:57:56.650223 1795697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:57:56.804727 1795697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:57:56.957660 1795697 docker.go:234] disabling docker service ...
	I1123 10:57:56.957836 1795697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:57:56.984783 1795697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:57:57.000150 1795697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:57:57.161282 1795697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:57:57.323626 1795697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:57:57.340437 1795697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:57:57.355167 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 10:57:57.364455 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 10:57:57.373132 1795697 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 10:57:57.373212 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 10:57:57.381985 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:57:57.390608 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 10:57:57.399309 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:57:57.408511 1795697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:57:57.416425 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 10:57:57.424992 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 10:57:57.433593 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 10:57:57.442531 1795697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:57:57.450454 1795697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:57:57.457980 1795697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:57:57.606681 1795697 ssh_runner.go:195] Run: sudo systemctl restart containerd
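The sed edits above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver, the runc v2 runtime and the pause:3.10.1 sandbox image before the restart at 10:57:57.606. A minimal sketch of the same toggle run by hand (commands and paths are taken from the log; the final grep is only a verification aid and is not part of the test run):

	sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
	sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
	sudo systemctl restart containerd
	grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml   # confirm the new values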
	I1123 10:57:57.760211 1795697 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 10:57:57.760278 1795697 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 10:57:57.769904 1795697 start.go:564] Will wait 60s for crictl version
	I1123 10:57:57.769979 1795697 ssh_runner.go:195] Run: which crictl
	I1123 10:57:57.775811 1795697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:57:57.834639 1795697 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 10:57:57.834707 1795697 ssh_runner.go:195] Run: containerd --version
	I1123 10:57:57.856790 1795697 ssh_runner.go:195] Run: containerd --version
	I1123 10:57:57.886604 1795697 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
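The runtime probe above reports containerd v2.1.5 speaking CRI API v1. The same checks can be reproduced against the socket the cluster will use (passing --runtime-endpoint explicitly is an assumption; the test instead relies on the /etc/crictl.yaml written at 10:57:57.340):

	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version
	containerd --version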
	I1123 10:57:55.132396 1792569 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:57:55.132704 1792569 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-055571] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:57:55.488239 1792569 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:57:55.488872 1792569 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-055571] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:57:55.764663 1792569 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:57:56.327671 1792569 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:57:57.114559 1792569 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:57:57.114790 1792569 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:57:57.914686 1792569 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:57:58.382912 1792569 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:57:59.357491 1792569 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:57:57.889556 1795697 cli_runner.go:164] Run: docker network inspect embed-certs-969029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:57:57.909308 1795697 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:57:57.913617 1795697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
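The one-liner above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the fresh mapping, then copy the result back into place. The same commands spread over several lines (the temporary file name is an assumption):

	{
	  grep -v $'\thost.minikube.internal$' /etc/hosts
	  echo "192.168.76.1	host.minikube.internal"
	} > /tmp/hosts.new
	sudo cp /tmp/hosts.new /etc/hosts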
	I1123 10:57:57.933684 1795697 kubeadm.go:884] updating cluster {Name:embed-certs-969029 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-969029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:57:57.933806 1795697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 10:57:57.933875 1795697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:57:57.971980 1795697 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 10:57:57.972001 1795697 containerd.go:534] Images already preloaded, skipping extraction
	I1123 10:57:57.972059 1795697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:57:58.007250 1795697 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 10:57:58.007274 1795697 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:57:58.007282 1795697 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 10:57:58.007382 1795697 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-969029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-969029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:57:58.007453 1795697 ssh_runner.go:195] Run: sudo crictl info
	I1123 10:57:58.041845 1795697 cni.go:84] Creating CNI manager for ""
	I1123 10:57:58.041912 1795697 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:57:58.041946 1795697 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:57:58.041998 1795697 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-969029 NodeName:embed-certs-969029 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:57:58.042156 1795697 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-969029"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
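The kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below and later fed to kubeadm init at 10:57:59.911. One way to validate such a file without touching the node is a dry run (standard kubeadm flag; running it here is an illustration, not something the test does):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run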
	
	I1123 10:57:58.042263 1795697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:57:58.051543 1795697 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:57:58.051621 1795697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:57:58.060487 1795697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1123 10:57:58.075450 1795697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:57:58.090632 1795697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1123 10:57:58.105597 1795697 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:57:58.109568 1795697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:57:58.119646 1795697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:57:58.256910 1795697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:57:58.273968 1795697 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029 for IP: 192.168.76.2
	I1123 10:57:58.273993 1795697 certs.go:195] generating shared ca certs ...
	I1123 10:57:58.274009 1795697 certs.go:227] acquiring lock for ca certs: {Name:mk3cca888d785818ac92c3c8d4e66a37bae0b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:58.274139 1795697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key
	I1123 10:57:58.274188 1795697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key
	I1123 10:57:58.274200 1795697 certs.go:257] generating profile certs ...
	I1123 10:57:58.274252 1795697 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.key
	I1123 10:57:58.274268 1795697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.crt with IP's: []
	I1123 10:57:58.476429 1795697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.crt ...
	I1123 10:57:58.476462 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.crt: {Name:mkee9096516671ab77910576bc03c62248bda2bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:58.476688 1795697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.key ...
	I1123 10:57:58.476706 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.key: {Name:mk9b1f1b88acd9142be294a0df14524c2c54f523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:58.476816 1795697 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key.2df6413d
	I1123 10:57:58.476836 1795697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt.2df6413d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 10:57:58.662905 1795697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt.2df6413d ...
	I1123 10:57:58.662940 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt.2df6413d: {Name:mk64545c82de12695ead4c4465b64ab1441d6148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:58.663442 1795697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key.2df6413d ...
	I1123 10:57:58.663466 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key.2df6413d: {Name:mkc69de968d64cff294fd00a05314da14bf3a6bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:58.663581 1795697 certs.go:382] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt.2df6413d -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt
	I1123 10:57:58.663665 1795697 certs.go:386] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key.2df6413d -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key
	I1123 10:57:58.663725 1795697 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.key
	I1123 10:57:58.663744 1795697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.crt with IP's: []
	I1123 10:57:59.311710 1795697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.crt ...
	I1123 10:57:59.311742 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.crt: {Name:mkf9f184bb31794e506028794b68db494704fc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:59.311967 1795697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.key ...
	I1123 10:57:59.311986 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.key: {Name:mkf3aa14db4c8b267911397ca446ad4d01c79151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
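crypto.go generates the profile's client, apiserver and proxy-client key pairs in-process and signs them with the CAs found under .minikube. An equivalent manual sketch with openssl (file names and subject fields are assumptions; minikube's Go code, not openssl, produced the files referenced above):

	openssl genrsa -out client.key 2048
	openssl req -new -key client.key -subj "/O=system:masters/CN=minikube-user" -out client.csr
	openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt -days 365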
	I1123 10:57:59.312191 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem (1338 bytes)
	W1123 10:57:59.312238 1795697 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532_empty.pem, impossibly tiny 0 bytes
	I1123 10:57:59.312252 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:57:59.312282 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:57:59.312311 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:57:59.312342 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem (1675 bytes)
	I1123 10:57:59.312395 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:57:59.313070 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:57:59.330246 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:57:59.349672 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:57:59.369124 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:57:59.388761 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:57:59.408702 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:57:59.427629 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:57:59.446313 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:57:59.465526 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /usr/share/ca-certificates/15845322.pem (1708 bytes)
	I1123 10:57:59.484539 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:57:59.504150 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem --> /usr/share/ca-certificates/1584532.pem (1338 bytes)
	I1123 10:57:59.523804 1795697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:57:59.537969 1795697 ssh_runner.go:195] Run: openssl version
	I1123 10:57:59.544762 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15845322.pem && ln -fs /usr/share/ca-certificates/15845322.pem /etc/ssl/certs/15845322.pem"
	I1123 10:57:59.553751 1795697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15845322.pem
	I1123 10:57:59.557994 1795697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:17 /usr/share/ca-certificates/15845322.pem
	I1123 10:57:59.558115 1795697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15845322.pem
	I1123 10:57:59.600699 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15845322.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:57:59.609939 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:57:59.618453 1795697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:59.622742 1795697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:10 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:59.622857 1795697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:59.675169 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:57:59.689423 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584532.pem && ln -fs /usr/share/ca-certificates/1584532.pem /etc/ssl/certs/1584532.pem"
	I1123 10:57:59.700804 1795697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584532.pem
	I1123 10:57:59.706163 1795697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:17 /usr/share/ca-certificates/1584532.pem
	I1123 10:57:59.706280 1795697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584532.pem
	I1123 10:57:59.764704 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584532.pem /etc/ssl/certs/51391683.0"
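The openssl x509 -hash calls above compute the subject-hash names (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL expects to find as symlinks in /etc/ssl/certs. The same linking step done by hand (the variable name is an assumption):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"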
	I1123 10:57:59.778823 1795697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:57:59.782896 1795697 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:57:59.783005 1795697 kubeadm.go:401] StartCluster: {Name:embed-certs-969029 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-969029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:57:59.783111 1795697 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 10:57:59.783215 1795697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:57:59.816341 1795697 cri.go:89] found id: ""
	I1123 10:57:59.816454 1795697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:57:59.826602 1795697 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:57:59.834879 1795697 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:57:59.834980 1795697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:57:59.845020 1795697 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:57:59.845087 1795697 kubeadm.go:158] found existing configuration files:
	
	I1123 10:57:59.845166 1795697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:57:59.853715 1795697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:57:59.853818 1795697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:57:59.861340 1795697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:57:59.869832 1795697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:57:59.869944 1795697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:57:59.877916 1795697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:57:59.886562 1795697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:57:59.886673 1795697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:57:59.894614 1795697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:57:59.903210 1795697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:57:59.903318 1795697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:57:59.911044 1795697 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:57:59.963278 1795697 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:57:59.963711 1795697 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:57:59.992315 1795697 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:57:59.992473 1795697 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:57:59.992515 1795697 kubeadm.go:319] OS: Linux
	I1123 10:57:59.992564 1795697 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:57:59.992617 1795697 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:57:59.992668 1795697 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:57:59.992720 1795697 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:57:59.992771 1795697 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:57:59.992835 1795697 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:57:59.992885 1795697 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:57:59.992936 1795697 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:57:59.992988 1795697 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:58:00.134130 1795697 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:58:00.134362 1795697 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:58:00.134520 1795697 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:58:00.152742 1795697 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:57:59.708655 1792569 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:58:00.710816 1792569 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:58:00.713800 1792569 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:58:00.719748 1792569 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:58:00.158453 1795697 out.go:252]   - Generating certificates and keys ...
	I1123 10:58:00.158651 1795697 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:58:00.160157 1795697 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:58:00.880001 1795697 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:58:01.849562 1795697 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:58:02.223520 1795697 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:58:00.723242 1792569 out.go:252]   - Booting up control plane ...
	I1123 10:58:00.723352 1792569 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:58:00.723430 1792569 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:58:00.724296 1792569 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:58:00.759784 1792569 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:58:00.759893 1792569 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:58:00.769910 1792569 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:58:00.770010 1792569 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:58:00.770050 1792569 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:58:00.938641 1792569 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:58:00.938762 1792569 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:58:01.938884 1792569 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001744759s
	I1123 10:58:01.942322 1792569 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:58:01.942652 1792569 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 10:58:01.942749 1792569 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:58:01.942829 1792569 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:58:02.999540 1795697 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:58:03.115533 1795697 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:58:03.115684 1795697 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-969029 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 10:58:03.623538 1795697 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:58:03.623675 1795697 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-969029 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 10:58:04.403543 1795697 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:58:04.815542 1795697 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:58:05.099515 1795697 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:58:05.099596 1795697 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:58:06.184788 1795697 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:58:06.663895 1795697 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:58:06.957961 1795697 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:58:07.647273 1795697 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:58:08.192343 1795697 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:58:08.193468 1795697 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:58:08.196403 1795697 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:58:07.726887 1792569 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.784042217s
	I1123 10:58:10.919434 1792569 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.977030135s
	I1123 10:58:11.444064 1792569 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.501488698s
	I1123 10:58:11.473606 1792569 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:58:11.501493 1792569 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:58:11.530344 1792569 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:58:11.530780 1792569 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-055571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:58:11.542495 1792569 kubeadm.go:319] [bootstrap-token] Using token: 2awhk1.t6olsn12sy2o68lm
	I1123 10:58:08.199731 1795697 out.go:252]   - Booting up control plane ...
	I1123 10:58:08.199835 1795697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:58:08.199946 1795697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:58:08.200817 1795697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:58:08.226839 1795697 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:58:08.226951 1795697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:58:08.238318 1795697 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:58:08.238442 1795697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:58:08.238482 1795697 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:58:08.464903 1795697 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:58:08.465024 1795697 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:58:10.967594 1795697 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.500905325s
	I1123 10:58:10.969133 1795697 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:58:10.969227 1795697 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 10:58:10.969316 1795697 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:58:10.969395 1795697 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
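kubeadm's control-plane-check polls the three endpoints listed above until each reports healthy. The same probes can be issued manually from inside the node (doing so by hand is an illustration; the URLs come from the log, and -k skips TLS verification):

	curl -k https://192.168.76.2:8443/livez      # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz      # kube-controller-manager
	curl -k https://127.0.0.1:10259/livez        # kube-scheduler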
	I1123 10:58:11.545733 1792569 out.go:252]   - Configuring RBAC rules ...
	I1123 10:58:11.545856 1792569 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:58:11.555324 1792569 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:58:11.564503 1792569 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:58:11.569410 1792569 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:58:11.574428 1792569 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:58:11.592193 1792569 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:58:11.850759 1792569 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:58:12.361239 1792569 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:58:12.855836 1792569 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:58:12.857310 1792569 kubeadm.go:319] 
	I1123 10:58:12.857395 1792569 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:58:12.857404 1792569 kubeadm.go:319] 
	I1123 10:58:12.857482 1792569 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:58:12.857486 1792569 kubeadm.go:319] 
	I1123 10:58:12.857510 1792569 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:58:12.857955 1792569 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:58:12.858019 1792569 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:58:12.858024 1792569 kubeadm.go:319] 
	I1123 10:58:12.858078 1792569 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:58:12.858082 1792569 kubeadm.go:319] 
	I1123 10:58:12.858129 1792569 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:58:12.858133 1792569 kubeadm.go:319] 
	I1123 10:58:12.858185 1792569 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:58:12.858260 1792569 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:58:12.858328 1792569 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:58:12.858331 1792569 kubeadm.go:319] 
	I1123 10:58:12.858626 1792569 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:58:12.858750 1792569 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:58:12.858771 1792569 kubeadm.go:319] 
	I1123 10:58:12.859054 1792569 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2awhk1.t6olsn12sy2o68lm \
	I1123 10:58:12.859269 1792569 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 \
	I1123 10:58:12.859496 1792569 kubeadm.go:319] 	--control-plane 
	I1123 10:58:12.859533 1792569 kubeadm.go:319] 
	I1123 10:58:12.859807 1792569 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:58:12.859843 1792569 kubeadm.go:319] 
	I1123 10:58:12.860121 1792569 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2awhk1.t6olsn12sy2o68lm \
	I1123 10:58:12.860445 1792569 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 
	I1123 10:58:12.879795 1792569 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:58:12.880191 1792569 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:58:12.880316 1792569 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:58:12.880328 1792569 cni.go:84] Creating CNI manager for ""
	I1123 10:58:12.880335 1792569 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:58:12.883457 1792569 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:58:12.886352 1792569 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:58:12.897399 1792569 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:58:12.897424 1792569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:58:12.937711 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:58:13.558689 1792569 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:58:13.558818 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:13.558908 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-055571 minikube.k8s.io/updated_at=2025_11_23T10_58_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=no-preload-055571 minikube.k8s.io/primary=true
	I1123 10:58:13.887832 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:13.887902 1792569 ops.go:34] apiserver oom_adj: -16
	I1123 10:58:14.388399 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:14.888402 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:15.388249 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:15.888335 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:16.388370 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:16.887885 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:17.387892 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:17.551562 1792569 kubeadm.go:1114] duration metric: took 3.992788154s to wait for elevateKubeSystemPrivileges
	I1123 10:58:17.551601 1792569 kubeadm.go:403] duration metric: took 24.630076642s to StartCluster
	I1123 10:58:17.551628 1792569 settings.go:142] acquiring lock: {Name:mk2ffa164862318fd53ac563f81d54c15c17157b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:58:17.551689 1792569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:58:17.552359 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:58:17.552557 1792569 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:58:17.552644 1792569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:58:17.552871 1792569 config.go:182] Loaded profile config "no-preload-055571": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:58:17.552913 1792569 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:58:17.552973 1792569 addons.go:70] Setting storage-provisioner=true in profile "no-preload-055571"
	I1123 10:58:17.552987 1792569 addons.go:239] Setting addon storage-provisioner=true in "no-preload-055571"
	I1123 10:58:17.553017 1792569 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:58:17.553778 1792569 addons.go:70] Setting default-storageclass=true in profile "no-preload-055571"
	I1123 10:58:17.553803 1792569 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-055571"
	I1123 10:58:17.554047 1792569 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:58:17.554217 1792569 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:58:17.556255 1792569 out.go:179] * Verifying Kubernetes components...
	I1123 10:58:17.560465 1792569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:58:17.586400 1792569 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:58:14.077262 1795697 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.10770246s
	I1123 10:58:16.784994 1795697 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.815747071s
	I1123 10:58:18.972269 1795697 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002967348s
	I1123 10:58:19.003844 1795697 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:58:19.027174 1795697 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:58:19.053834 1795697 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:58:19.054046 1795697 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-969029 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:58:19.069633 1795697 kubeadm.go:319] [bootstrap-token] Using token: kq6vm6.09lpm1jjzme9srb8
	I1123 10:58:17.591738 1792569 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:58:17.591762 1792569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:58:17.591824 1792569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:58:17.598220 1792569 addons.go:239] Setting addon default-storageclass=true in "no-preload-055571"
	I1123 10:58:17.598257 1792569 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:58:17.598665 1792569 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:58:17.636275 1792569 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:58:17.636296 1792569 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:58:17.636360 1792569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:58:17.647304 1792569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:58:17.674310 1792569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:58:18.026058 1792569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
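The long pipeline above patches the coredns ConfigMap so that host.minikube.internal resolves inside the cluster, and also enables the log plugin. Reconstructed from the sed expression in the command, the stanza spliced in ahead of the existing forward plugin looks like this:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}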
	I1123 10:58:18.032171 1792569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:58:18.137552 1792569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:58:18.164955 1792569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:58:19.190287 1792569 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.158078667s)
	I1123 10:58:19.190677 1792569 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.164575716s)
	I1123 10:58:19.190782 1792569 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:58:19.191747 1792569 node_ready.go:35] waiting up to 6m0s for node "no-preload-055571" to be "Ready" ...
	I1123 10:58:19.192094 1792569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.054515669s)
	I1123 10:58:19.696391 1792569 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-055571" context rescaled to 1 replicas
	I1123 10:58:19.754683 1792569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.589692988s)
	I1123 10:58:19.757953 1792569 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 10:58:19.072843 1795697 out.go:252]   - Configuring RBAC rules ...
	I1123 10:58:19.072972 1795697 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:58:19.078435 1795697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:58:19.096054 1795697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:58:19.103935 1795697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:58:19.110233 1795697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:58:19.114190 1795697 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:58:19.380116 1795697 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:58:19.853763 1795697 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:58:20.384200 1795697 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:58:20.384221 1795697 kubeadm.go:319] 
	I1123 10:58:20.384281 1795697 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:58:20.384292 1795697 kubeadm.go:319] 
	I1123 10:58:20.384369 1795697 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:58:20.384373 1795697 kubeadm.go:319] 
	I1123 10:58:20.384398 1795697 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:58:20.384457 1795697 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:58:20.384507 1795697 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:58:20.384511 1795697 kubeadm.go:319] 
	I1123 10:58:20.384565 1795697 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:58:20.384569 1795697 kubeadm.go:319] 
	I1123 10:58:20.384616 1795697 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:58:20.384620 1795697 kubeadm.go:319] 
	I1123 10:58:20.384671 1795697 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:58:20.384747 1795697 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:58:20.384815 1795697 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:58:20.384818 1795697 kubeadm.go:319] 
	I1123 10:58:20.384909 1795697 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:58:20.384986 1795697 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:58:20.384990 1795697 kubeadm.go:319] 
	I1123 10:58:20.385075 1795697 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kq6vm6.09lpm1jjzme9srb8 \
	I1123 10:58:20.385187 1795697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 \
	I1123 10:58:20.385208 1795697 kubeadm.go:319] 	--control-plane 
	I1123 10:58:20.385212 1795697 kubeadm.go:319] 
	I1123 10:58:20.385297 1795697 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:58:20.385301 1795697 kubeadm.go:319] 
	I1123 10:58:20.385383 1795697 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kq6vm6.09lpm1jjzme9srb8 \
	I1123 10:58:20.385485 1795697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 
	I1123 10:58:20.391080 1795697 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:58:20.391334 1795697 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:58:20.391439 1795697 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:58:20.391695 1795697 cni.go:84] Creating CNI manager for ""
	I1123 10:58:20.391747 1795697 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:58:20.397876 1795697 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:58:20.401083 1795697 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:58:20.408114 1795697 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:58:20.408133 1795697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:58:20.451737 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:58:21.313782 1795697 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:58:21.313918 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:21.313996 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-969029 minikube.k8s.io/updated_at=2025_11_23T10_58_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=embed-certs-969029 minikube.k8s.io/primary=true
	I1123 10:58:21.681382 1795697 ops.go:34] apiserver oom_adj: -16
	I1123 10:58:21.681502 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:22.181651 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:22.682511 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:19.761028 1792569 addons.go:530] duration metric: took 2.208104641s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1123 10:58:21.195305 1792569 node_ready.go:57] node "no-preload-055571" has "Ready":"False" status (will retry)
	W1123 10:58:23.694994 1792569 node_ready.go:57] node "no-preload-055571" has "Ready":"False" status (will retry)
	I1123 10:58:23.182431 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:23.681720 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:24.182384 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:24.681606 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:24.820728 1795697 kubeadm.go:1114] duration metric: took 3.506859118s to wait for elevateKubeSystemPrivileges
	I1123 10:58:24.820770 1795697 kubeadm.go:403] duration metric: took 25.037770717s to StartCluster
	I1123 10:58:24.820787 1795697 settings.go:142] acquiring lock: {Name:mk2ffa164862318fd53ac563f81d54c15c17157b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:58:24.820846 1795697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:58:24.822156 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:58:24.822380 1795697 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:58:24.822482 1795697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:58:24.822708 1795697 config.go:182] Loaded profile config "embed-certs-969029": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:58:24.822748 1795697 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:58:24.822810 1795697 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-969029"
	I1123 10:58:24.822824 1795697 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-969029"
	I1123 10:58:24.822846 1795697 host.go:66] Checking if "embed-certs-969029" exists ...
	I1123 10:58:24.823508 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:58:24.823758 1795697 addons.go:70] Setting default-storageclass=true in profile "embed-certs-969029"
	I1123 10:58:24.823781 1795697 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-969029"
	I1123 10:58:24.824064 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:58:24.825696 1795697 out.go:179] * Verifying Kubernetes components...
	I1123 10:58:24.829148 1795697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:58:24.858324 1795697 addons.go:239] Setting addon default-storageclass=true in "embed-certs-969029"
	I1123 10:58:24.858366 1795697 host.go:66] Checking if "embed-certs-969029" exists ...
	I1123 10:58:24.858783 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:58:24.858810 1795697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:58:24.861785 1795697 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:58:24.861814 1795697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:58:24.861887 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:58:24.896275 1795697 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:58:24.896294 1795697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:58:24.896359 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:58:24.902497 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:58:24.930606 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:58:25.116648 1795697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:58:25.155865 1795697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:58:25.222564 1795697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:58:25.227175 1795697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:58:25.753018 1795697 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 10:58:25.754206 1795697 node_ready.go:35] waiting up to 6m0s for node "embed-certs-969029" to be "Ready" ...
	I1123 10:58:26.247838 1795697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025232871s)
	I1123 10:58:26.247883 1795697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020662733s)
	I1123 10:58:26.261999 1795697 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-969029" context rescaled to 1 replicas
	I1123 10:58:26.266987 1795697 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:58:26.269804 1795697 addons.go:530] duration metric: took 1.447044393s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1123 10:58:27.757157 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:25.696695 1792569 node_ready.go:57] node "no-preload-055571" has "Ready":"False" status (will retry)
	W1123 10:58:28.194618 1792569 node_ready.go:57] node "no-preload-055571" has "Ready":"False" status (will retry)
	W1123 10:58:29.757766 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:32.257448 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:30.195010 1792569 node_ready.go:57] node "no-preload-055571" has "Ready":"False" status (will retry)
	I1123 10:58:32.694577 1792569 node_ready.go:49] node "no-preload-055571" is "Ready"
	I1123 10:58:32.694605 1792569 node_ready.go:38] duration metric: took 13.502835455s for node "no-preload-055571" to be "Ready" ...
	I1123 10:58:32.694633 1792569 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:58:32.694690 1792569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:58:32.713722 1792569 api_server.go:72] duration metric: took 15.161117804s to wait for apiserver process to appear ...
	I1123 10:58:32.713750 1792569 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:58:32.713768 1792569 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:58:32.722152 1792569 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:58:32.723232 1792569 api_server.go:141] control plane version: v1.34.1
	I1123 10:58:32.723259 1792569 api_server.go:131] duration metric: took 9.501898ms to wait for apiserver health ...
	I1123 10:58:32.723269 1792569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:58:32.726622 1792569 system_pods.go:59] 8 kube-system pods found
	I1123 10:58:32.726687 1792569 system_pods.go:61] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:58:32.726695 1792569 system_pods.go:61] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:32.726701 1792569 system_pods.go:61] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:32.726718 1792569 system_pods.go:61] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:32.726723 1792569 system_pods.go:61] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:32.726734 1792569 system_pods.go:61] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:32.726738 1792569 system_pods.go:61] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:32.726743 1792569 system_pods.go:61] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:58:32.726753 1792569 system_pods.go:74] duration metric: took 3.479063ms to wait for pod list to return data ...
	I1123 10:58:32.726761 1792569 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:58:32.730173 1792569 default_sa.go:45] found service account: "default"
	I1123 10:58:32.730200 1792569 default_sa.go:55] duration metric: took 3.432581ms for default service account to be created ...
	I1123 10:58:32.730211 1792569 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:58:32.733322 1792569 system_pods.go:86] 8 kube-system pods found
	I1123 10:58:32.733355 1792569 system_pods.go:89] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:58:32.733361 1792569 system_pods.go:89] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:32.733367 1792569 system_pods.go:89] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:32.733398 1792569 system_pods.go:89] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:32.733409 1792569 system_pods.go:89] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:32.733420 1792569 system_pods.go:89] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:32.733428 1792569 system_pods.go:89] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:32.733434 1792569 system_pods.go:89] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:58:32.733467 1792569 retry.go:31] will retry after 304.408436ms: missing components: kube-dns
	I1123 10:58:33.043545 1792569 system_pods.go:86] 8 kube-system pods found
	I1123 10:58:33.043587 1792569 system_pods.go:89] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:58:33.043594 1792569 system_pods.go:89] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:33.043601 1792569 system_pods.go:89] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:33.043624 1792569 system_pods.go:89] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:33.043645 1792569 system_pods.go:89] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:33.043650 1792569 system_pods.go:89] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:33.043654 1792569 system_pods.go:89] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:33.043660 1792569 system_pods.go:89] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:58:33.043680 1792569 retry.go:31] will retry after 243.372863ms: missing components: kube-dns
	I1123 10:58:33.292875 1792569 system_pods.go:86] 8 kube-system pods found
	I1123 10:58:33.292917 1792569 system_pods.go:89] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:58:33.292924 1792569 system_pods.go:89] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:33.292932 1792569 system_pods.go:89] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:33.292936 1792569 system_pods.go:89] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:33.292941 1792569 system_pods.go:89] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:33.292945 1792569 system_pods.go:89] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:33.292951 1792569 system_pods.go:89] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:33.292962 1792569 system_pods.go:89] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:58:33.292980 1792569 retry.go:31] will retry after 393.510988ms: missing components: kube-dns
	I1123 10:58:33.690917 1792569 system_pods.go:86] 8 kube-system pods found
	I1123 10:58:33.690951 1792569 system_pods.go:89] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:58:33.690957 1792569 system_pods.go:89] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:33.690975 1792569 system_pods.go:89] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:33.690980 1792569 system_pods.go:89] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:33.690991 1792569 system_pods.go:89] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:33.690995 1792569 system_pods.go:89] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:33.691002 1792569 system_pods.go:89] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:33.691008 1792569 system_pods.go:89] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:58:33.691028 1792569 retry.go:31] will retry after 395.162605ms: missing components: kube-dns
	I1123 10:58:34.090690 1792569 system_pods.go:86] 8 kube-system pods found
	I1123 10:58:34.090724 1792569 system_pods.go:89] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Running
	I1123 10:58:34.090732 1792569 system_pods.go:89] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:34.090736 1792569 system_pods.go:89] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:34.090741 1792569 system_pods.go:89] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:34.090745 1792569 system_pods.go:89] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:34.090774 1792569 system_pods.go:89] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:34.090791 1792569 system_pods.go:89] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:34.090796 1792569 system_pods.go:89] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Running
	I1123 10:58:34.090805 1792569 system_pods.go:126] duration metric: took 1.360587353s to wait for k8s-apps to be running ...
	I1123 10:58:34.090819 1792569 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:58:34.090888 1792569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:58:34.105966 1792569 system_svc.go:56] duration metric: took 15.138003ms WaitForService to wait for kubelet
	I1123 10:58:34.106049 1792569 kubeadm.go:587] duration metric: took 16.553460815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:58:34.106087 1792569 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:58:34.108802 1792569 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:58:34.108868 1792569 node_conditions.go:123] node cpu capacity is 2
	I1123 10:58:34.108888 1792569 node_conditions.go:105] duration metric: took 2.794336ms to run NodePressure ...
	I1123 10:58:34.108911 1792569 start.go:242] waiting for startup goroutines ...
	I1123 10:58:34.108920 1792569 start.go:247] waiting for cluster config update ...
	I1123 10:58:34.108934 1792569 start.go:256] writing updated cluster config ...
	I1123 10:58:34.109230 1792569 ssh_runner.go:195] Run: rm -f paused
	I1123 10:58:34.114678 1792569 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:58:34.118912 1792569 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b9hss" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.124043 1792569 pod_ready.go:94] pod "coredns-66bc5c9577-b9hss" is "Ready"
	I1123 10:58:34.124068 1792569 pod_ready.go:86] duration metric: took 5.132092ms for pod "coredns-66bc5c9577-b9hss" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.126325 1792569 pod_ready.go:83] waiting for pod "etcd-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.130964 1792569 pod_ready.go:94] pod "etcd-no-preload-055571" is "Ready"
	I1123 10:58:34.130991 1792569 pod_ready.go:86] duration metric: took 4.642841ms for pod "etcd-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.133398 1792569 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.137785 1792569 pod_ready.go:94] pod "kube-apiserver-no-preload-055571" is "Ready"
	I1123 10:58:34.137813 1792569 pod_ready.go:86] duration metric: took 4.391729ms for pod "kube-apiserver-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.139953 1792569 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.519757 1792569 pod_ready.go:94] pod "kube-controller-manager-no-preload-055571" is "Ready"
	I1123 10:58:34.519785 1792569 pod_ready.go:86] duration metric: took 379.805212ms for pod "kube-controller-manager-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.719622 1792569 pod_ready.go:83] waiting for pod "kube-proxy-6fnf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:35.119531 1792569 pod_ready.go:94] pod "kube-proxy-6fnf4" is "Ready"
	I1123 10:58:35.119562 1792569 pod_ready.go:86] duration metric: took 399.913949ms for pod "kube-proxy-6fnf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:35.320206 1792569 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:35.719247 1792569 pod_ready.go:94] pod "kube-scheduler-no-preload-055571" is "Ready"
	I1123 10:58:35.719276 1792569 pod_ready.go:86] duration metric: took 399.042609ms for pod "kube-scheduler-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:35.719290 1792569 pod_ready.go:40] duration metric: took 1.604573715s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:58:35.781193 1792569 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:58:35.784238 1792569 out.go:179] * Done! kubectl is now configured to use "no-preload-055571" cluster and "default" namespace by default
	W1123 10:58:34.257523 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:36.757002 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:38.757330 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:41.257031 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d3933774c00d4       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   3b993fd84303c       busybox                                     default
	141dfe2fe2c0f       138784d87c9c5       13 seconds ago      Running             coredns                   0                   d974ac78f94e9       coredns-66bc5c9577-b9hss                    kube-system
	f6ff857443149       66749159455b3       13 seconds ago      Running             storage-provisioner       0                   d1f48d3c6cebe       storage-provisioner                         kube-system
	9d45eab165f42       b1a8c6f707935       24 seconds ago      Running             kindnet-cni               0                   b17e3b0d4e95c       kindnet-4gsp7                               kube-system
	8b471e7e9bbda       05baa95f5142d       27 seconds ago      Running             kube-proxy                0                   f63480f595d4c       kube-proxy-6fnf4                            kube-system
	2f827144cf7fa       b5f57ec6b9867       43 seconds ago      Running             kube-scheduler            0                   d31fa2bd01cdf       kube-scheduler-no-preload-055571            kube-system
	14b800b67ad60       43911e833d64d       43 seconds ago      Running             kube-apiserver            0                   bdeb99787d352       kube-apiserver-no-preload-055571            kube-system
	6249f178fb08f       a1894772a478e       43 seconds ago      Running             etcd                      0                   91ccf8efa3085       etcd-no-preload-055571                      kube-system
	eab30623258b2       7eb2c6ff0c5a7       43 seconds ago      Running             kube-controller-manager   0                   768404e862924       kube-controller-manager-no-preload-055571   kube-system
	
	
	==> containerd <==
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.763682305Z" level=info msg="connecting to shim f6ff8574431495f4a49d9c3759b8049dfc4450cdb014fcd3928c598ca2c0da52" address="unix:///run/containerd/s/d08a8c552fa361ee5ea50b7dd1664ba292c4bdff815e226fa159fee0b232e032" protocol=ttrpc version=3
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.795830349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b9hss,Uid:dc7b7825-8cc7-46c1-97fa-1be6181d2214,Namespace:kube-system,Attempt:0,} returns sandbox id \"d974ac78f94e9d193e6905d8824355b1ec638405eb0cda8e5e8ce71da22f74c3\""
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.809430740Z" level=info msg="CreateContainer within sandbox \"d974ac78f94e9d193e6905d8824355b1ec638405eb0cda8e5e8ce71da22f74c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.823466945Z" level=info msg="Container 141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.836032202Z" level=info msg="CreateContainer within sandbox \"d974ac78f94e9d193e6905d8824355b1ec638405eb0cda8e5e8ce71da22f74c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611\""
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.839402426Z" level=info msg="StartContainer for \"141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611\""
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.843417615Z" level=info msg="connecting to shim 141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611" address="unix:///run/containerd/s/fc599243d01e9e25cb12964bc3826e733bbfdbc246e95d7714a26ac91a1c2a90" protocol=ttrpc version=3
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.851437391Z" level=info msg="StartContainer for \"f6ff8574431495f4a49d9c3759b8049dfc4450cdb014fcd3928c598ca2c0da52\" returns successfully"
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.961499446Z" level=info msg="StartContainer for \"141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611\" returns successfully"
	Nov 23 10:58:36 no-preload-055571 containerd[756]: time="2025-11-23T10:58:36.303485087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bd8008cd-cc28-45d9-8fa2-06099a099993,Namespace:default,Attempt:0,}"
	Nov 23 10:58:36 no-preload-055571 containerd[756]: time="2025-11-23T10:58:36.357106970Z" level=info msg="connecting to shim 3b993fd84303cb56d6227974a0d1ce802d8c685ce52d96d0a12fbf0599769fdb" address="unix:///run/containerd/s/da8fd57c9ae84815f9922fc211e36017ce1a9753536b85e1b44e9b080aee848c" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 10:58:36 no-preload-055571 containerd[756]: time="2025-11-23T10:58:36.425254202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bd8008cd-cc28-45d9-8fa2-06099a099993,Namespace:default,Attempt:0,} returns sandbox id \"3b993fd84303cb56d6227974a0d1ce802d8c685ce52d96d0a12fbf0599769fdb\""
	Nov 23 10:58:36 no-preload-055571 containerd[756]: time="2025-11-23T10:58:36.429014143Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.683882941Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.686328912Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937187"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.688777049Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.692204437Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.692809000Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.263750772s"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.692921671Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.700835176Z" level=info msg="CreateContainer within sandbox \"3b993fd84303cb56d6227974a0d1ce802d8c685ce52d96d0a12fbf0599769fdb\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.716991058Z" level=info msg="Container d3933774c00d41531266df66c608b81a1fd0b86b6e3ac5a971b58edeb0313342: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.737815806Z" level=info msg="CreateContainer within sandbox \"3b993fd84303cb56d6227974a0d1ce802d8c685ce52d96d0a12fbf0599769fdb\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d3933774c00d41531266df66c608b81a1fd0b86b6e3ac5a971b58edeb0313342\""
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.739010575Z" level=info msg="StartContainer for \"d3933774c00d41531266df66c608b81a1fd0b86b6e3ac5a971b58edeb0313342\""
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.740311302Z" level=info msg="connecting to shim d3933774c00d41531266df66c608b81a1fd0b86b6e3ac5a971b58edeb0313342" address="unix:///run/containerd/s/da8fd57c9ae84815f9922fc211e36017ce1a9753536b85e1b44e9b080aee848c" protocol=ttrpc version=3
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.813544112Z" level=info msg="StartContainer for \"d3933774c00d41531266df66c608b81a1fd0b86b6e3ac5a971b58edeb0313342\" returns successfully"
	
	
	==> coredns [141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58191 - 5103 "HINFO IN 9040134774686138549.247589159770230753. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023388887s
	
	
	==> describe nodes <==
	Name:               no-preload-055571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-055571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=no-preload-055571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_58_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:58:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-055571
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:58:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:58:43 +0000   Sun, 23 Nov 2025 10:58:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:58:43 +0000   Sun, 23 Nov 2025 10:58:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:58:43 +0000   Sun, 23 Nov 2025 10:58:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:58:43 +0000   Sun, 23 Nov 2025 10:58:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-055571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                6bebf923-fe25-46fc-b159-ca4a7a3f5ae9
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-b9hss                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-no-preload-055571                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         34s
	  kube-system                 kindnet-4gsp7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-055571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-no-preload-055571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-6fnf4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-055571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 26s                kube-proxy       
	  Normal   NodeHasSufficientMemory  45s (x8 over 45s)  kubelet          Node no-preload-055571 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    45s (x8 over 45s)  kubelet          Node no-preload-055571 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     45s (x7 over 45s)  kubelet          Node no-preload-055571 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  45s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 34s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  34s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 34s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  33s                kubelet          Node no-preload-055571 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    33s                kubelet          Node no-preload-055571 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     33s                kubelet          Node no-preload-055571 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           29s                node-controller  Node no-preload-055571 event: Registered Node no-preload-055571 in Controller
	  Normal   NodeReady                14s                kubelet          Node no-preload-055571 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:50] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [6249f178fb08fff7a76e05ef2091e7236bff165ee849beeba741138fd5d4e5d1] <==
	{"level":"warn","ts":"2025-11-23T10:58:06.412136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.446019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.515630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.551601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.590270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.671980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.759457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.813858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.846638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.887373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.908422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.967467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.993846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.037245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.133335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.167366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.241979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.262336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.330494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.360098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.403398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.491395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.539476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.584339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.772051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58068","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:58:46 up 11:41,  0 user,  load average: 3.98, 3.31, 2.91
	Linux no-preload-055571 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9d45eab165f426941b46cacf4c992c6d8d994ff8d83232faff07678871d4234f] <==
	I1123 10:58:21.928686       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:58:21.928949       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:58:21.929080       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:58:21.929097       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:58:21.929119       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:58:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:58:22.132779       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:58:22.132807       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:58:22.132823       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:58:22.134102       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:58:22.333711       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:58:22.333738       1 metrics.go:72] Registering metrics
	I1123 10:58:22.333795       1 controller.go:711] "Syncing nftables rules"
	I1123 10:58:32.140501       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:58:32.140540       1 main.go:301] handling current node
	I1123 10:58:42.131803       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:58:42.131846       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14b800b67ad6052023ad76ace7ece6ce928c08d72e9876a0ba4ec63aa2fd2940] <==
	E1123 10:58:09.414765       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 10:58:09.417138       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:09.417350       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:58:09.441561       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:09.441873       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:58:09.475298       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:58:09.638531       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:58:09.768777       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:58:09.796733       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:58:09.797025       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:58:11.240756       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:58:11.305866       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:58:11.474151       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:58:11.502424       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 10:58:11.504381       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:58:11.517871       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:58:12.080599       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:58:12.318687       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:58:12.359305       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:58:12.385093       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:58:17.444895       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:17.456167       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:17.957374       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:58:18.012353       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 10:58:45.288084       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35866: use of closed network connection
	
	
	==> kube-controller-manager [eab30623258b276d71d20e0094aa488fe2eaf689d062eb457557742f0cf5e8dd] <==
	I1123 10:58:17.170326       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:58:17.170368       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:58:17.171025       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:58:17.171628       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:58:17.171811       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:58:17.171992       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-055571"
	I1123 10:58:17.172131       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 10:58:17.172928       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:58:17.178140       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:58:17.186933       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:58:17.186966       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:58:17.186974       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:58:17.188457       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:58:17.188587       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:58:17.190102       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:58:17.202115       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:58:17.215101       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:58:17.219848       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:58:17.220316       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:58:17.221415       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:58:17.223113       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:58:17.223130       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:58:17.224038       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:58:17.224877       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:58:37.175924       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8b471e7e9bbda9cbfbea76934750632ac310334af415b16e44073b2e576eabc9] <==
	I1123 10:58:19.534534       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:58:19.678570       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:58:19.794940       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:58:19.794991       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:58:19.795065       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:58:19.879901       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:58:19.879971       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:58:19.896507       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:58:19.896947       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:58:19.896976       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:58:19.899125       1 config.go:200] "Starting service config controller"
	I1123 10:58:19.899136       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:58:19.899152       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:58:19.899156       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:58:19.899298       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:58:19.899307       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:58:19.904100       1 config.go:309] "Starting node config controller"
	I1123 10:58:19.904115       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:58:19.904122       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:58:19.999270       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 10:58:19.999345       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:58:19.999632       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2f827144cf7fac652ccb74aef0066e57b21ecef01a8dcb73809e96022b694400] <==
	I1123 10:58:07.003458       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:58:10.876326       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:58:10.876515       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:58:10.876562       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:58:10.876596       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:58:10.908195       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:58:10.908461       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:58:10.911490       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:58:10.911586       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:58:10.911822       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:58:10.911639       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 10:58:10.920649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 10:58:12.315252       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:58:13 no-preload-055571 kubelet[2105]: E1123 10:58:13.756351    2105 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-055571\" already exists" pod="kube-system/kube-apiserver-no-preload-055571"
	Nov 23 10:58:13 no-preload-055571 kubelet[2105]: I1123 10:58:13.781341    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-055571" podStartSLOduration=1.781323142 podStartE2EDuration="1.781323142s" podCreationTimestamp="2025-11-23 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:13.764743613 +0000 UTC m=+1.526547442" watchObservedRunningTime="2025-11-23 10:58:13.781323142 +0000 UTC m=+1.543126947"
	Nov 23 10:58:13 no-preload-055571 kubelet[2105]: I1123 10:58:13.796728    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-055571" podStartSLOduration=1.79670892 podStartE2EDuration="1.79670892s" podCreationTimestamp="2025-11-23 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:13.782014383 +0000 UTC m=+1.543818180" watchObservedRunningTime="2025-11-23 10:58:13.79670892 +0000 UTC m=+1.558512725"
	Nov 23 10:58:13 no-preload-055571 kubelet[2105]: I1123 10:58:13.825998    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-055571" podStartSLOduration=1.8259800240000001 podStartE2EDuration="1.825980024s" podCreationTimestamp="2025-11-23 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:13.810504033 +0000 UTC m=+1.572307871" watchObservedRunningTime="2025-11-23 10:58:13.825980024 +0000 UTC m=+1.587783821"
	Nov 23 10:58:17 no-preload-055571 kubelet[2105]: I1123 10:58:17.134804    2105 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 10:58:17 no-preload-055571 kubelet[2105]: I1123 10:58:17.136842    2105 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.235373    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/004a7b4a-a9c1-47c9-bf13-e04773eb1112-lib-modules\") pod \"kindnet-4gsp7\" (UID: \"004a7b4a-a9c1-47c9-bf13-e04773eb1112\") " pod="kube-system/kindnet-4gsp7"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.235425    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/004a7b4a-a9c1-47c9-bf13-e04773eb1112-cni-cfg\") pod \"kindnet-4gsp7\" (UID: \"004a7b4a-a9c1-47c9-bf13-e04773eb1112\") " pod="kube-system/kindnet-4gsp7"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.235444    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/004a7b4a-a9c1-47c9-bf13-e04773eb1112-xtables-lock\") pod \"kindnet-4gsp7\" (UID: \"004a7b4a-a9c1-47c9-bf13-e04773eb1112\") " pod="kube-system/kindnet-4gsp7"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.235466    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmx2\" (UniqueName: \"kubernetes.io/projected/004a7b4a-a9c1-47c9-bf13-e04773eb1112-kube-api-access-wgmx2\") pod \"kindnet-4gsp7\" (UID: \"004a7b4a-a9c1-47c9-bf13-e04773eb1112\") " pod="kube-system/kindnet-4gsp7"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.342299    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2685bee3-d65c-4c1a-854d-2980a0e2bced-kube-proxy\") pod \"kube-proxy-6fnf4\" (UID: \"2685bee3-d65c-4c1a-854d-2980a0e2bced\") " pod="kube-system/kube-proxy-6fnf4"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.342473    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2685bee3-d65c-4c1a-854d-2980a0e2bced-lib-modules\") pod \"kube-proxy-6fnf4\" (UID: \"2685bee3-d65c-4c1a-854d-2980a0e2bced\") " pod="kube-system/kube-proxy-6fnf4"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.342505    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dq6w\" (UniqueName: \"kubernetes.io/projected/2685bee3-d65c-4c1a-854d-2980a0e2bced-kube-api-access-5dq6w\") pod \"kube-proxy-6fnf4\" (UID: \"2685bee3-d65c-4c1a-854d-2980a0e2bced\") " pod="kube-system/kube-proxy-6fnf4"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.342529    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2685bee3-d65c-4c1a-854d-2980a0e2bced-xtables-lock\") pod \"kube-proxy-6fnf4\" (UID: \"2685bee3-d65c-4c1a-854d-2980a0e2bced\") " pod="kube-system/kube-proxy-6fnf4"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.428319    2105 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 10:58:21 no-preload-055571 kubelet[2105]: I1123 10:58:21.809341    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4gsp7" podStartSLOduration=1.011567775 podStartE2EDuration="3.809315334s" podCreationTimestamp="2025-11-23 10:58:18 +0000 UTC" firstStartedPulling="2025-11-23 10:58:18.797568119 +0000 UTC m=+6.559371916" lastFinishedPulling="2025-11-23 10:58:21.595315661 +0000 UTC m=+9.357119475" observedRunningTime="2025-11-23 10:58:21.80899837 +0000 UTC m=+9.570802167" watchObservedRunningTime="2025-11-23 10:58:21.809315334 +0000 UTC m=+9.571119131"
	Nov 23 10:58:21 no-preload-055571 kubelet[2105]: I1123 10:58:21.810153    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6fnf4" podStartSLOduration=3.81014043 podStartE2EDuration="3.81014043s" podCreationTimestamp="2025-11-23 10:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:19.790610375 +0000 UTC m=+7.552414213" watchObservedRunningTime="2025-11-23 10:58:21.81014043 +0000 UTC m=+9.571944235"
	Nov 23 10:58:32 no-preload-055571 kubelet[2105]: I1123 10:58:32.227923    2105 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 10:58:32 no-preload-055571 kubelet[2105]: I1123 10:58:32.379859    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gjc9\" (UniqueName: \"kubernetes.io/projected/dc7b7825-8cc7-46c1-97fa-1be6181d2214-kube-api-access-6gjc9\") pod \"coredns-66bc5c9577-b9hss\" (UID: \"dc7b7825-8cc7-46c1-97fa-1be6181d2214\") " pod="kube-system/coredns-66bc5c9577-b9hss"
	Nov 23 10:58:32 no-preload-055571 kubelet[2105]: I1123 10:58:32.380132    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/38d8473b-9b2a-451c-bc60-96e2e7cd2a7a-tmp\") pod \"storage-provisioner\" (UID: \"38d8473b-9b2a-451c-bc60-96e2e7cd2a7a\") " pod="kube-system/storage-provisioner"
	Nov 23 10:58:32 no-preload-055571 kubelet[2105]: I1123 10:58:32.380178    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7b7825-8cc7-46c1-97fa-1be6181d2214-config-volume\") pod \"coredns-66bc5c9577-b9hss\" (UID: \"dc7b7825-8cc7-46c1-97fa-1be6181d2214\") " pod="kube-system/coredns-66bc5c9577-b9hss"
	Nov 23 10:58:32 no-preload-055571 kubelet[2105]: I1123 10:58:32.380200    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l7zl\" (UniqueName: \"kubernetes.io/projected/38d8473b-9b2a-451c-bc60-96e2e7cd2a7a-kube-api-access-7l7zl\") pod \"storage-provisioner\" (UID: \"38d8473b-9b2a-451c-bc60-96e2e7cd2a7a\") " pod="kube-system/storage-provisioner"
	Nov 23 10:58:33 no-preload-055571 kubelet[2105]: I1123 10:58:33.845917    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b9hss" podStartSLOduration=15.845897702 podStartE2EDuration="15.845897702s" podCreationTimestamp="2025-11-23 10:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:33.833331378 +0000 UTC m=+21.595135183" watchObservedRunningTime="2025-11-23 10:58:33.845897702 +0000 UTC m=+21.607701499"
	Nov 23 10:58:33 no-preload-055571 kubelet[2105]: I1123 10:58:33.862719    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.862698524 podStartE2EDuration="14.862698524s" podCreationTimestamp="2025-11-23 10:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:33.847427126 +0000 UTC m=+21.609230931" watchObservedRunningTime="2025-11-23 10:58:33.862698524 +0000 UTC m=+21.624502321"
	Nov 23 10:58:36 no-preload-055571 kubelet[2105]: I1123 10:58:36.104813    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-965kh\" (UniqueName: \"kubernetes.io/projected/bd8008cd-cc28-45d9-8fa2-06099a099993-kube-api-access-965kh\") pod \"busybox\" (UID: \"bd8008cd-cc28-45d9-8fa2-06099a099993\") " pod="default/busybox"
	
	
	==> storage-provisioner [f6ff8574431495f4a49d9c3759b8049dfc4450cdb014fcd3928c598ca2c0da52] <==
	I1123 10:58:32.852978       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:58:32.866882       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:58:32.867460       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:58:32.869511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:32.875505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:58:32.875647       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:58:32.876546       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-055571_82bd6546-ee7f-445c-b100-d2f0794b24b9!
	I1123 10:58:32.884834       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c452823d-f421-47b4-ba83-5334871b3f15", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-055571_82bd6546-ee7f-445c-b100-d2f0794b24b9 became leader
	W1123 10:58:32.887915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:32.897272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:58:32.977519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-055571_82bd6546-ee7f-445c-b100-d2f0794b24b9!
	W1123 10:58:34.900783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:34.905678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:36.908985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:36.913795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:38.916703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:38.921454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:40.924462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:40.929100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:42.933886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:42.940562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:44.943822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:44.948331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-055571 -n no-preload-055571
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-055571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-055571
helpers_test.go:243: (dbg) docker inspect no-preload-055571:

-- stdout --
	[
	    {
	        "Id": "59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f",
	        "Created": "2025-11-23T10:57:25.748729169Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1792875,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:57:25.809718116Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f/hostname",
	        "HostsPath": "/var/lib/docker/containers/59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f/hosts",
	        "LogPath": "/var/lib/docker/containers/59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f/59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f-json.log",
	        "Name": "/no-preload-055571",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-055571:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-055571",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "59920e0233106f972318a8941175c765c9d1d8a4f13f4df0301ae5a206cd622f",
	                "LowerDir": "/var/lib/docker/overlay2/5b539834b2428236c9253cad6fc0efaa94fadf221210ee96b438109a6933e4ba-init/diff:/var/lib/docker/overlay2/fe0bef51c968206096993e9a75db2143cd9cd74d56696a257291ce63f851a2d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5b539834b2428236c9253cad6fc0efaa94fadf221210ee96b438109a6933e4ba/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5b539834b2428236c9253cad6fc0efaa94fadf221210ee96b438109a6933e4ba/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5b539834b2428236c9253cad6fc0efaa94fadf221210ee96b438109a6933e4ba/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-055571",
	                "Source": "/var/lib/docker/volumes/no-preload-055571/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-055571",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-055571",
	                "name.minikube.sigs.k8s.io": "no-preload-055571",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "43ba3fa5ee2e08815e78dcdf9fb17cb6a09a78e27da95aae4294ce284c4a82f2",
	            "SandboxKey": "/var/run/docker/netns/43ba3fa5ee2e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35264"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35265"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35268"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35266"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35267"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-055571": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "4e:da:53:a4:00:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "91ca09998ccaffe4fc03ffa1431b438347753da0802ed94bcca33ae2c6c74c52",
	                    "EndpointID": "3eb3b6ee7e3cd0676328889505428983d4b757e83bb47a7cb82059cd6e68bfa5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-055571",
	                        "59920e023310"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-055571 -n no-preload-055571
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-055571 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-055571 logs -n 25: (1.212940212s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ start   │ -p force-systemd-env-479166 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │                     │
	│ start   │ -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:53 UTC │
	│ delete  │ -p kubernetes-upgrade-871841                                                                                                                                                                                                                        │ kubernetes-upgrade-871841 │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:53 UTC │
	│ start   │ -p cert-expiration-679101 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-679101    │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ force-systemd-env-479166 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p force-systemd-env-479166                                                                                                                                                                                                                         │ force-systemd-env-479166  │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p cert-options-501705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ cert-options-501705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ -p cert-options-501705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p cert-options-501705                                                                                                                                                                                                                              │ cert-options-501705       │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:55 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:55 UTC │ 23 Nov 25 10:55 UTC │
	│ stop    │ -p old-k8s-version-162750 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:55 UTC │ 23 Nov 25 10:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-162750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:56 UTC │ 23 Nov 25 10:56 UTC │
	│ start   │ -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:56 UTC │ 23 Nov 25 10:57 UTC │
	│ image   │ old-k8s-version-162750 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ pause   │ -p old-k8s-version-162750 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ unpause │ -p old-k8s-version-162750 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p old-k8s-version-162750                                                                                                                                                                                                                           │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p old-k8s-version-162750                                                                                                                                                                                                                           │ old-k8s-version-162750    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ start   │ -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-055571         │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:58 UTC │
	│ start   │ -p cert-expiration-679101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-679101    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p cert-expiration-679101                                                                                                                                                                                                                           │ cert-expiration-679101    │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ start   │ -p embed-certs-969029 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-969029        │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:57:42
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:57:42.839913 1795697 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:57:42.840123 1795697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:57:42.840150 1795697 out.go:374] Setting ErrFile to fd 2...
	I1123 10:57:42.840168 1795697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:57:42.840448 1795697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:57:42.840879 1795697 out.go:368] Setting JSON to false
	I1123 10:57:42.841855 1795697 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":42008,"bootTime":1763853455,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:57:42.841951 1795697 start.go:143] virtualization:  
	I1123 10:57:42.846872 1795697 out.go:179] * [embed-certs-969029] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:57:42.851762 1795697 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:57:42.851911 1795697 notify.go:221] Checking for updates...
	I1123 10:57:42.859066 1795697 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:57:42.862605 1795697 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:57:42.865870 1795697 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:57:42.869272 1795697 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:57:42.872575 1795697 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:57:42.876502 1795697 config.go:182] Loaded profile config "no-preload-055571": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:57:42.876603 1795697 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:57:42.912096 1795697 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:57:42.912222 1795697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:57:42.993718 1795697 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 10:57:42.982225706 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:57:42.993823 1795697 docker.go:319] overlay module found
	I1123 10:57:42.997399 1795697 out.go:179] * Using the docker driver based on user configuration
	I1123 10:57:43.000327 1795697 start.go:309] selected driver: docker
	I1123 10:57:43.000352 1795697 start.go:927] validating driver "docker" against <nil>
	I1123 10:57:43.000366 1795697 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:57:43.001183 1795697 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:57:43.102998 1795697 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:55 OomKillDisable:true NGoroutines:68 SystemTime:2025-11-23 10:57:43.087640463 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:57:43.103144 1795697 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:57:43.103389 1795697 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:57:43.106395 1795697 out.go:179] * Using Docker driver with root privileges
	I1123 10:57:43.109329 1795697 cni.go:84] Creating CNI manager for ""
	I1123 10:57:43.109410 1795697 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:57:43.109419 1795697 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:57:43.109509 1795697 start.go:353] cluster config:
	{Name:embed-certs-969029 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-969029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:57:43.112637 1795697 out.go:179] * Starting "embed-certs-969029" primary control-plane node in "embed-certs-969029" cluster
	I1123 10:57:43.115853 1795697 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 10:57:43.118785 1795697 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:57:43.121668 1795697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 10:57:43.121714 1795697 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 10:57:43.121724 1795697 cache.go:65] Caching tarball of preloaded images
	I1123 10:57:43.121810 1795697 preload.go:238] Found /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 10:57:43.121820 1795697 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 10:57:43.121933 1795697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/config.json ...
	I1123 10:57:43.121951 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/config.json: {Name:mkf41a7bab235d324f39d66779e47beeeede1b81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:43.122094 1795697 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:57:43.145884 1795697 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:57:43.145909 1795697 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:57:43.145923 1795697 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:57:43.145958 1795697 start.go:360] acquireMachinesLock for embed-certs-969029: {Name:mk4f9a35c261c685efd8080b5b8d7f71b5a367c4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:57:43.146148 1795697 start.go:364] duration metric: took 97.278µs to acquireMachinesLock for "embed-certs-969029"
	I1123 10:57:43.146187 1795697 start.go:93] Provisioning new machine with config: &{Name:embed-certs-969029 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-969029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:57:43.146264 1795697 start.go:125] createHost starting for "" (driver="docker")
	I1123 10:57:40.100126 1792569 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.433855052s)
	I1123 10:57:40.100153 1792569 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1123 10:57:40.100174 1792569 containerd.go:285] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 10:57:40.100241 1792569 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1123 10:57:40.100309 1792569 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.434169465s)
	I1123 10:57:40.100330 1792569 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1123 10:57:40.100346 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (8035840 bytes)
	I1123 10:57:41.785338 1792569 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.685067667s)
	I1123 10:57:41.785370 1792569 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1123 10:57:41.785395 1792569 containerd.go:285] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 10:57:41.785444 1792569 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1
	I1123 10:57:43.408541 1792569 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/kube-proxy_v1.34.1: (1.62284182s)
	I1123 10:57:43.408573 1792569 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1123 10:57:43.408597 1792569 containerd.go:285] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1123 10:57:43.408650 1792569 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0
	I1123 10:57:43.150675 1795697 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 10:57:43.150972 1795697 start.go:159] libmachine.API.Create for "embed-certs-969029" (driver="docker")
	I1123 10:57:43.151008 1795697 client.go:173] LocalClient.Create starting
	I1123 10:57:43.151094 1795697 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem
	I1123 10:57:43.151132 1795697 main.go:143] libmachine: Decoding PEM data...
	I1123 10:57:43.151153 1795697 main.go:143] libmachine: Parsing certificate...
	I1123 10:57:43.151236 1795697 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem
	I1123 10:57:43.151260 1795697 main.go:143] libmachine: Decoding PEM data...
	I1123 10:57:43.151272 1795697 main.go:143] libmachine: Parsing certificate...
	I1123 10:57:43.151664 1795697 cli_runner.go:164] Run: docker network inspect embed-certs-969029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 10:57:43.171968 1795697 cli_runner.go:211] docker network inspect embed-certs-969029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 10:57:43.172051 1795697 network_create.go:284] running [docker network inspect embed-certs-969029] to gather additional debugging logs...
	I1123 10:57:43.172071 1795697 cli_runner.go:164] Run: docker network inspect embed-certs-969029
	W1123 10:57:43.186804 1795697 cli_runner.go:211] docker network inspect embed-certs-969029 returned with exit code 1
	I1123 10:57:43.186831 1795697 network_create.go:287] error running [docker network inspect embed-certs-969029]: docker network inspect embed-certs-969029: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-969029 not found
	I1123 10:57:43.186862 1795697 network_create.go:289] output of [docker network inspect embed-certs-969029]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-969029 not found
	
	** /stderr **
	I1123 10:57:43.186955 1795697 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:57:43.214759 1795697 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e44f782e1ead IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ae:ef:b1:2b:de} reservation:<nil>}
	I1123 10:57:43.215072 1795697 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-d795300f262d IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:aa:f7:c2:f9:ad:5b} reservation:<nil>}
	I1123 10:57:43.215391 1795697 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4b6f246690b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:2a:41:9a:79:92:5d} reservation:<nil>}
	I1123 10:57:43.215782 1795697 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001997d60}
	I1123 10:57:43.215799 1795697 network_create.go:124] attempt to create docker network embed-certs-969029 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 10:57:43.215853 1795697 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-969029 embed-certs-969029
	I1123 10:57:43.292255 1795697 network_create.go:108] docker network embed-certs-969029 192.168.76.0/24 created
	I1123 10:57:43.292284 1795697 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-969029" container
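The lines above show the subnet picker walking past 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24, settling on the free 192.168.76.0/24, and then assigning the node the first address after the bridge gateway. A minimal Go sketch of that last step only (the address arithmetic, not minikube's network/kic code):

package main

import (
	"fmt"
	"net"
)

// staticIP returns the first address after the gateway of an IPv4 subnet,
// mirroring how 192.168.76.0/24 above yields the node IP 192.168.76.2.
func staticIP(cidr string) (string, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return "", err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return "", fmt.Errorf("not an IPv4 subnet: %s", cidr)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += 2 // .1 is the bridge gateway, .2 goes to the container
	return out.String(), nil
}

func main() {
	ip, err := staticIP("192.168.76.0/24")
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.76.2
}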
	I1123 10:57:43.292355 1795697 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 10:57:43.319143 1795697 cli_runner.go:164] Run: docker volume create embed-certs-969029 --label name.minikube.sigs.k8s.io=embed-certs-969029 --label created_by.minikube.sigs.k8s.io=true
	I1123 10:57:43.339868 1795697 oci.go:103] Successfully created a docker volume embed-certs-969029
	I1123 10:57:43.339965 1795697 cli_runner.go:164] Run: docker run --rm --name embed-certs-969029-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-969029 --entrypoint /usr/bin/test -v embed-certs-969029:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 10:57:44.152961 1795697 oci.go:107] Successfully prepared a docker volume embed-certs-969029
	I1123 10:57:44.153026 1795697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 10:57:44.153039 1795697 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 10:57:44.153117 1795697 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-969029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 10:57:47.573582 1792569 ssh_runner.go:235] Completed: sudo ctr -n=k8s.io images import /var/lib/minikube/images/etcd_3.6.4-0: (4.164903737s)
	I1123 10:57:47.573608 1792569 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1123 10:57:47.573627 1792569 containerd.go:285] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1123 10:57:47.573673 1792569 ssh_runner.go:195] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/storage-provisioner_v5
	I1123 10:57:48.488395 1792569 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1123 10:57:48.488435 1792569 cache_images.go:125] Successfully loaded all cached images
	I1123 10:57:48.488441 1792569 cache_images.go:94] duration metric: took 15.345382978s to LoadCachedImages
	I1123 10:57:48.488457 1792569 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1123 10:57:48.488556 1792569 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-055571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:57:48.488623 1792569 ssh_runner.go:195] Run: sudo crictl info
	I1123 10:57:48.518362 1792569 cni.go:84] Creating CNI manager for ""
	I1123 10:57:48.518389 1792569 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:57:48.518404 1792569 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:57:48.518426 1792569 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-055571 NodeName:no-preload-055571 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:57:48.518546 1792569 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-055571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
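	The rendered config above is four YAML documents: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. A quick way to sanity-check the stream before it is handed to kubeadm, sketched in Go and assuming the text above has been saved locally as kubeadm.yaml (gopkg.in/yaml.v3 is an arbitrary choice here, not what minikube itself uses):

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// kubeadm.yaml: the four documents shown in the log, saved locally.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			break // end of the multi-document stream
		}
		if err != nil {
			panic(err)
		}
		// Print apiVersion/kind of each document, e.g. kubeadm.k8s.io/v1beta4 InitConfiguration.
		fmt.Printf("%v %v\n", doc["apiVersion"], doc["kind"])
	}
}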
	
	I1123 10:57:48.518621 1792569 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:57:48.528227 1792569 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1123 10:57:48.528295 1792569 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1123 10:57:48.537037 1792569 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
	I1123 10:57:48.537136 1792569 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1123 10:57:48.539474 1792569 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.34.1/kubeadm
	I1123 10:57:48.539929 1792569 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet.sha256 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.34.1/kubelet
	I1123 10:57:48.542688 1792569 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1123 10:57:48.542718 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (58130616 bytes)
	I1123 10:57:49.335463 1792569 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1123 10:57:49.387528 1792569 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1123 10:57:49.387621 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (71434424 bytes)
	I1123 10:57:49.556141 1792569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:57:49.593421 1792569 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1123 10:57:50.724726 1795697 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-969029:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (6.571573486s)
	I1123 10:57:50.724760 1795697 kic.go:203] duration metric: took 6.571716859s to extract preloaded images to volume ...
	W1123 10:57:50.724899 1795697 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 10:57:50.725017 1795697 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 10:57:50.810679 1795697 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-969029 --name embed-certs-969029 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-969029 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-969029 --network embed-certs-969029 --ip 192.168.76.2 --volume embed-certs-969029:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 10:57:51.331720 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Running}}
	I1123 10:57:51.360964 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:57:51.393781 1795697 cli_runner.go:164] Run: docker exec embed-certs-969029 stat /var/lib/dpkg/alternatives/iptables
	I1123 10:57:51.471381 1795697 oci.go:144] the created container "embed-certs-969029" has a running status.
	I1123 10:57:51.471408 1795697 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa...
	I1123 10:57:51.797180 1795697 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 10:57:51.828904 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:57:51.877446 1795697 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 10:57:51.877464 1795697 kic_runner.go:114] Args: [docker exec --privileged embed-certs-969029 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 10:57:51.994943 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:57:52.067594 1795697 machine.go:94] provisionDockerMachine start ...
	I1123 10:57:52.067692 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:52.129200 1795697 main.go:143] libmachine: Using SSH client type: native
	I1123 10:57:52.129536 1795697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35269 <nil> <nil>}
	I1123 10:57:52.129545 1795697 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:57:52.130260 1795697 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
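	The dial above fails with a handshake EOF because sshd inside the just-created container is not accepting connections yet; the provisioner simply retries and succeeds at 10:57:55 further down. A rough equivalent of that retry loop with golang.org/x/crypto/ssh, a sketch rather than libmachine's code, reusing the key path, port 35269 and the docker user reported in this log:

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and forwarded port as reported in the log; "docker" is the SSH user the log uses.
	keyPath := "/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // fine for a throwaway local test container
		Timeout:         5 * time.Second,
	}

	var client *ssh.Client
	for attempt := 1; attempt <= 10; attempt++ {
		client, err = ssh.Dial("tcp", "127.0.0.1:35269", cfg)
		if err == nil {
			break
		}
		// sshd in the freshly started container may not be listening yet; wait and retry.
		time.Sleep(2 * time.Second)
	}
	if client == nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh connection established")
}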
	I1123 10:57:49.607962 1792569 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1123 10:57:49.608000 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (56426788 bytes)
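	kubectl, kubeadm and kubelet are fetched from dl.k8s.io together with a .sha256 companion file (the checksum=file: query in the URLs above). A self-contained sketch of the same verification, not minikube's downloader, using the kubelet URL from the log:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

// fetch downloads url into path and returns the hex SHA-256 of the bytes written.
func fetch(url, path string) (string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	f, err := os.Create(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	h := sha256.New()
	if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	url := "https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubelet" // from the log above
	got, err := fetch(url, "kubelet")
	if err != nil {
		panic(err)
	}
	resp, err := http.Get(url + ".sha256")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	want, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	if got != strings.TrimSpace(string(want)) {
		panic("checksum mismatch")
	}
	fmt.Println("kubelet checksum OK:", got)
}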
	I1123 10:57:50.545220 1792569 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:57:50.554754 1792569 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 10:57:50.569039 1792569 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:57:50.582934 1792569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
	I1123 10:57:50.597055 1792569 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:57:50.601474 1792569 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:57:50.617831 1792569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:57:50.738862 1792569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:57:50.773571 1792569 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571 for IP: 192.168.85.2
	I1123 10:57:50.773589 1792569 certs.go:195] generating shared ca certs ...
	I1123 10:57:50.773606 1792569 certs.go:227] acquiring lock for ca certs: {Name:mk3cca888d785818ac92c3c8d4e66a37bae0b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:50.773745 1792569 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key
	I1123 10:57:50.773784 1792569 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key
	I1123 10:57:50.773791 1792569 certs.go:257] generating profile certs ...
	I1123 10:57:50.773844 1792569 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.key
	I1123 10:57:50.773854 1792569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt with IP's: []
	I1123 10:57:51.502401 1792569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt ...
	I1123 10:57:51.502431 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt: {Name:mkbee6e4ac8c95d3a8dd5df5f98c472e8c937edd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:51.502601 1792569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.key ...
	I1123 10:57:51.502608 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.key: {Name:mka6432625140a8eeb602cdb110a2eae12603dec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:51.502689 1792569 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key.3d6856fb
	I1123 10:57:51.502702 1792569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt.3d6856fb with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 10:57:51.702999 1792569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt.3d6856fb ...
	I1123 10:57:51.708327 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt.3d6856fb: {Name:mka8fb05df05904acd54dcd24c79da07b3426e22 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:51.708558 1792569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key.3d6856fb ...
	I1123 10:57:51.708595 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key.3d6856fb: {Name:mk407188a5e29b7d8747d3ad610977a67fe0d62a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:51.708709 1792569 certs.go:382] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt.3d6856fb -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt
	I1123 10:57:51.708819 1792569 certs.go:386] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key.3d6856fb -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key
	I1123 10:57:51.708918 1792569 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key
	I1123 10:57:51.708970 1792569 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.crt with IP's: []
	I1123 10:57:52.423510 1792569 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.crt ...
	I1123 10:57:52.423586 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.crt: {Name:mk4e0c09d8874f5df249851d07529a0f2c40b6e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:52.423798 1792569 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key ...
	I1123 10:57:52.423849 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key: {Name:mk042917f7e0a317d4013b2378dcad1fa9f2480e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:52.424064 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem (1338 bytes)
	W1123 10:57:52.424142 1792569 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532_empty.pem, impossibly tiny 0 bytes
	I1123 10:57:52.424169 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:57:52.424225 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:57:52.424271 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:57:52.424323 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem (1675 bytes)
	I1123 10:57:52.424392 1792569 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:57:52.424992 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:57:52.445164 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:57:52.464302 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:57:52.484112 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:57:52.511567 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:57:52.532926 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:57:52.555613 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:57:52.578773 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:57:52.600363 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /usr/share/ca-certificates/15845322.pem (1708 bytes)
	I1123 10:57:52.621075 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:57:52.650939 1792569 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem --> /usr/share/ca-certificates/1584532.pem (1338 bytes)
	I1123 10:57:52.684998 1792569 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:57:52.704219 1792569 ssh_runner.go:195] Run: openssl version
	I1123 10:57:52.711621 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15845322.pem && ln -fs /usr/share/ca-certificates/15845322.pem /etc/ssl/certs/15845322.pem"
	I1123 10:57:52.721631 1792569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15845322.pem
	I1123 10:57:52.728847 1792569 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:17 /usr/share/ca-certificates/15845322.pem
	I1123 10:57:52.728992 1792569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15845322.pem
	I1123 10:57:52.773442 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15845322.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:57:52.782774 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:57:52.791623 1792569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:52.796077 1792569 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:10 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:52.796157 1792569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:52.838515 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:57:52.848284 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584532.pem && ln -fs /usr/share/ca-certificates/1584532.pem /etc/ssl/certs/1584532.pem"
	I1123 10:57:52.857445 1792569 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584532.pem
	I1123 10:57:52.862188 1792569 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:17 /usr/share/ca-certificates/1584532.pem
	I1123 10:57:52.862255 1792569 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584532.pem
	I1123 10:57:52.907158 1792569 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584532.pem /etc/ssl/certs/51391683.0"
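	Each CA above is hashed with openssl x509 -hash and exposed under /etc/ssl/certs as <hash>.0 (e.g. b5213941.0 for minikubeCA.pem). A small Go sketch that repeats the hash and confirms the resulting symlink resolves; the certificate path comes from the log, everything else is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path used in the log above

	// Same subject-hash the log computes with `openssl x509 -hash -noout -in <cert>`.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941

	// The provisioner links /etc/ssl/certs/<hash>.0 at the cert; resolve the link chain.
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	target, err := filepath.EvalSymlinks(link)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s -> %s\n", link, target)
}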
	I1123 10:57:52.916811 1792569 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:57:52.921477 1792569 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:57:52.921528 1792569 kubeadm.go:401] StartCluster: {Name:no-preload-055571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:57:52.921601 1792569 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 10:57:52.921663 1792569 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:57:52.953463 1792569 cri.go:89] found id: ""
	I1123 10:57:52.953538 1792569 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:57:52.963616 1792569 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:57:52.974015 1792569 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:57:52.974107 1792569 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:57:52.983464 1792569 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:57:52.983491 1792569 kubeadm.go:158] found existing configuration files:
	
	I1123 10:57:52.983573 1792569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:57:52.991976 1792569 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:57:52.992083 1792569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:57:53.000385 1792569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:57:53.010690 1792569 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:57:53.010767 1792569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:57:53.019318 1792569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:57:53.028157 1792569 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:57:53.028274 1792569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:57:53.036653 1792569 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:57:53.045127 1792569 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:57:53.045245 1792569 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:57:53.053849 1792569 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:57:53.091263 1792569 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:57:53.091516 1792569 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:57:53.114552 1792569 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:57:53.114665 1792569 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:57:53.114729 1792569 kubeadm.go:319] OS: Linux
	I1123 10:57:53.114798 1792569 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:57:53.114871 1792569 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:57:53.114943 1792569 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:57:53.115015 1792569 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:57:53.115085 1792569 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:57:53.115155 1792569 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:57:53.115295 1792569 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:57:53.115371 1792569 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:57:53.115435 1792569 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:57:53.181157 1792569 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:57:53.181308 1792569 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:57:53.181426 1792569 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:57:53.187649 1792569 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:57:53.193213 1792569 out.go:252]   - Generating certificates and keys ...
	I1123 10:57:53.193329 1792569 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:57:53.193441 1792569 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:57:53.635677 1792569 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:57:53.940028 1792569 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:57:54.283695 1792569 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:57:54.352503 1792569 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:57:55.282665 1795697 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-969029
	
	I1123 10:57:55.282686 1795697 ubuntu.go:182] provisioning hostname "embed-certs-969029"
	I1123 10:57:55.282747 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:55.302481 1795697 main.go:143] libmachine: Using SSH client type: native
	I1123 10:57:55.302790 1795697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35269 <nil> <nil>}
	I1123 10:57:55.302800 1795697 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-969029 && echo "embed-certs-969029" | sudo tee /etc/hostname
	I1123 10:57:55.465285 1795697 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-969029
	
	I1123 10:57:55.465371 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:55.490313 1795697 main.go:143] libmachine: Using SSH client type: native
	I1123 10:57:55.490626 1795697 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35269 <nil> <nil>}
	I1123 10:57:55.490649 1795697 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-969029' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-969029/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-969029' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:57:55.643083 1795697 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:57:55.643152 1795697 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-1582671/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-1582671/.minikube}
	I1123 10:57:55.643227 1795697 ubuntu.go:190] setting up certificates
	I1123 10:57:55.643251 1795697 provision.go:84] configureAuth start
	I1123 10:57:55.643339 1795697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-969029
	I1123 10:57:55.664896 1795697 provision.go:143] copyHostCerts
	I1123 10:57:55.664968 1795697 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem, removing ...
	I1123 10:57:55.664989 1795697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem
	I1123 10:57:55.665062 1795697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem (1078 bytes)
	I1123 10:57:55.665162 1795697 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem, removing ...
	I1123 10:57:55.665167 1795697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem
	I1123 10:57:55.665193 1795697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem (1123 bytes)
	I1123 10:57:55.665249 1795697 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem, removing ...
	I1123 10:57:55.665254 1795697 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem
	I1123 10:57:55.665276 1795697 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem (1675 bytes)
	I1123 10:57:55.665329 1795697 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem org=jenkins.embed-certs-969029 san=[127.0.0.1 192.168.76.2 embed-certs-969029 localhost minikube]
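	The server certificate is issued with the SAN set shown above (127.0.0.1, 192.168.76.2, embed-certs-969029, localhost, minikube). A condensed sketch of producing a certificate with those SANs using Go's crypto/x509; it self-signs to stay short, whereas minikube signs with its CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-969029"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-969029", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	// Self-signed here only for brevity; pass a CA cert and key as parent/signer in real use.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}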
	I1123 10:57:55.742234 1795697 provision.go:177] copyRemoteCerts
	I1123 10:57:55.742322 1795697 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:57:55.742377 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:55.760349 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:57:55.872000 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:57:55.890974 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 10:57:55.909794 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:57:55.928464 1795697 provision.go:87] duration metric: took 285.179862ms to configureAuth
	I1123 10:57:55.928545 1795697 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:57:55.928755 1795697 config.go:182] Loaded profile config "embed-certs-969029": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:57:55.928784 1795697 machine.go:97] duration metric: took 3.861171385s to provisionDockerMachine
	I1123 10:57:55.928804 1795697 client.go:176] duration metric: took 12.777785538s to LocalClient.Create
	I1123 10:57:55.928862 1795697 start.go:167] duration metric: took 12.777878179s to libmachine.API.Create "embed-certs-969029"
	I1123 10:57:55.928886 1795697 start.go:293] postStartSetup for "embed-certs-969029" (driver="docker")
	I1123 10:57:55.928907 1795697 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:57:55.928992 1795697 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:57:55.929050 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:55.950375 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:57:56.060824 1795697 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:57:56.065050 1795697 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:57:56.065082 1795697 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:57:56.065094 1795697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/addons for local assets ...
	I1123 10:57:56.065153 1795697 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/files for local assets ...
	I1123 10:57:56.065232 1795697 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem -> 15845322.pem in /etc/ssl/certs
	I1123 10:57:56.065346 1795697 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:57:56.074152 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:57:56.094834 1795697 start.go:296] duration metric: took 165.921304ms for postStartSetup
	I1123 10:57:56.095294 1795697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-969029
	I1123 10:57:56.115348 1795697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/config.json ...
	I1123 10:57:56.115626 1795697 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:57:56.115684 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:56.141015 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:57:56.244705 1795697 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:57:56.250208 1795697 start.go:128] duration metric: took 13.103928476s to createHost
	I1123 10:57:56.250230 1795697 start.go:83] releasing machines lock for "embed-certs-969029", held for 13.104066737s
	I1123 10:57:56.250296 1795697 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-969029
	I1123 10:57:56.282499 1795697 ssh_runner.go:195] Run: cat /version.json
	I1123 10:57:56.282557 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:56.282801 1795697 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:57:56.282872 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:57:56.319084 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:57:56.335581 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:57:56.439326 1795697 ssh_runner.go:195] Run: systemctl --version
	I1123 10:57:56.540865 1795697 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:57:56.545527 1795697 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:57:56.545596 1795697 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:57:56.579443 1795697 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 10:57:56.579468 1795697 start.go:496] detecting cgroup driver to use...
	I1123 10:57:56.579504 1795697 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:57:56.579552 1795697 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 10:57:56.596677 1795697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 10:57:56.612491 1795697 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:57:56.612556 1795697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:57:56.630742 1795697 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:57:56.650223 1795697 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:57:56.804727 1795697 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:57:56.957660 1795697 docker.go:234] disabling docker service ...
	I1123 10:57:56.957836 1795697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:57:56.984783 1795697 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:57:57.000150 1795697 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:57:57.161282 1795697 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:57:57.323626 1795697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:57:57.340437 1795697 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:57:57.355167 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 10:57:57.364455 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 10:57:57.373132 1795697 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 10:57:57.373212 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 10:57:57.381985 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:57:57.390608 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 10:57:57.399309 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:57:57.408511 1795697 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:57:57.416425 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 10:57:57.424992 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 10:57:57.433593 1795697 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 10:57:57.442531 1795697 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:57:57.450454 1795697 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:57:57.457980 1795697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:57:57.606681 1795697 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 10:57:57.760211 1795697 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 10:57:57.760278 1795697 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 10:57:57.769904 1795697 start.go:564] Will wait 60s for crictl version
	I1123 10:57:57.769979 1795697 ssh_runner.go:195] Run: which crictl
	I1123 10:57:57.775811 1795697 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:57:57.834639 1795697 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 10:57:57.834707 1795697 ssh_runner.go:195] Run: containerd --version
	I1123 10:57:57.856790 1795697 ssh_runner.go:195] Run: containerd --version
	I1123 10:57:57.886604 1795697 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 10:57:55.132396 1792569 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:57:55.132704 1792569 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-055571] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:57:55.488239 1792569 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:57:55.488872 1792569 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-055571] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 10:57:55.764663 1792569 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:57:56.327671 1792569 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:57:57.114559 1792569 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:57:57.114790 1792569 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:57:57.914686 1792569 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:57:58.382912 1792569 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:57:59.357491 1792569 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:57:57.889556 1795697 cli_runner.go:164] Run: docker network inspect embed-certs-969029 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:57:57.909308 1795697 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 10:57:57.913617 1795697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:57:57.933684 1795697 kubeadm.go:884] updating cluster {Name:embed-certs-969029 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-969029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:57:57.933806 1795697 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 10:57:57.933875 1795697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:57:57.971980 1795697 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 10:57:57.972001 1795697 containerd.go:534] Images already preloaded, skipping extraction
	I1123 10:57:57.972059 1795697 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:57:58.007250 1795697 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 10:57:58.007274 1795697 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:57:58.007282 1795697 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 10:57:58.007382 1795697 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-969029 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-969029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:57:58.007453 1795697 ssh_runner.go:195] Run: sudo crictl info
	I1123 10:57:58.041845 1795697 cni.go:84] Creating CNI manager for ""
	I1123 10:57:58.041912 1795697 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:57:58.041946 1795697 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:57:58.041998 1795697 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-969029 NodeName:embed-certs-969029 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:57:58.042156 1795697 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-969029"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:57:58.042263 1795697 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:57:58.051543 1795697 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:57:58.051621 1795697 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:57:58.060487 1795697 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1123 10:57:58.075450 1795697 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:57:58.090632 1795697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1123 10:57:58.105597 1795697 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:57:58.109568 1795697 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:57:58.119646 1795697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:57:58.256910 1795697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:57:58.273968 1795697 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029 for IP: 192.168.76.2
	I1123 10:57:58.273993 1795697 certs.go:195] generating shared ca certs ...
	I1123 10:57:58.274009 1795697 certs.go:227] acquiring lock for ca certs: {Name:mk3cca888d785818ac92c3c8d4e66a37bae0b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:58.274139 1795697 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key
	I1123 10:57:58.274188 1795697 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key
	I1123 10:57:58.274200 1795697 certs.go:257] generating profile certs ...
	I1123 10:57:58.274252 1795697 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.key
	I1123 10:57:58.274268 1795697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.crt with IP's: []
	I1123 10:57:58.476429 1795697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.crt ...
	I1123 10:57:58.476462 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.crt: {Name:mkee9096516671ab77910576bc03c62248bda2bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:58.476688 1795697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.key ...
	I1123 10:57:58.476706 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/client.key: {Name:mk9b1f1b88acd9142be294a0df14524c2c54f523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:58.476816 1795697 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key.2df6413d
	I1123 10:57:58.476836 1795697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt.2df6413d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 10:57:58.662905 1795697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt.2df6413d ...
	I1123 10:57:58.662940 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt.2df6413d: {Name:mk64545c82de12695ead4c4465b64ab1441d6148 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:58.663442 1795697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key.2df6413d ...
	I1123 10:57:58.663466 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key.2df6413d: {Name:mkc69de968d64cff294fd00a05314da14bf3a6bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:58.663581 1795697 certs.go:382] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt.2df6413d -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt
	I1123 10:57:58.663665 1795697 certs.go:386] copying /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key.2df6413d -> /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key
	I1123 10:57:58.663725 1795697 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.key
	I1123 10:57:58.663744 1795697 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.crt with IP's: []
	I1123 10:57:59.311710 1795697 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.crt ...
	I1123 10:57:59.311742 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.crt: {Name:mkf9f184bb31794e506028794b68db494704fc03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:59.311967 1795697 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.key ...
	I1123 10:57:59.311986 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.key: {Name:mkf3aa14db4c8b267911397ca446ad4d01c79151 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:57:59.312191 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem (1338 bytes)
	W1123 10:57:59.312238 1795697 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532_empty.pem, impossibly tiny 0 bytes
	I1123 10:57:59.312252 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:57:59.312282 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:57:59.312311 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:57:59.312342 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem (1675 bytes)
	I1123 10:57:59.312395 1795697 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:57:59.313070 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:57:59.330246 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:57:59.349672 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:57:59.369124 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:57:59.388761 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 10:57:59.408702 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:57:59.427629 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:57:59.446313 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/embed-certs-969029/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:57:59.465526 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /usr/share/ca-certificates/15845322.pem (1708 bytes)
	I1123 10:57:59.484539 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:57:59.504150 1795697 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem --> /usr/share/ca-certificates/1584532.pem (1338 bytes)
	I1123 10:57:59.523804 1795697 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:57:59.537969 1795697 ssh_runner.go:195] Run: openssl version
	I1123 10:57:59.544762 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15845322.pem && ln -fs /usr/share/ca-certificates/15845322.pem /etc/ssl/certs/15845322.pem"
	I1123 10:57:59.553751 1795697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15845322.pem
	I1123 10:57:59.557994 1795697 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:17 /usr/share/ca-certificates/15845322.pem
	I1123 10:57:59.558115 1795697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15845322.pem
	I1123 10:57:59.600699 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15845322.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 10:57:59.609939 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:57:59.618453 1795697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:59.622742 1795697 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:10 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:59.622857 1795697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:57:59.675169 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:57:59.689423 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584532.pem && ln -fs /usr/share/ca-certificates/1584532.pem /etc/ssl/certs/1584532.pem"
	I1123 10:57:59.700804 1795697 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584532.pem
	I1123 10:57:59.706163 1795697 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:17 /usr/share/ca-certificates/1584532.pem
	I1123 10:57:59.706280 1795697 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584532.pem
	I1123 10:57:59.764704 1795697 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584532.pem /etc/ssl/certs/51391683.0"
	I1123 10:57:59.778823 1795697 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:57:59.782896 1795697 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 10:57:59.783005 1795697 kubeadm.go:401] StartCluster: {Name:embed-certs-969029 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-969029 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:57:59.783111 1795697 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 10:57:59.783215 1795697 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:57:59.816341 1795697 cri.go:89] found id: ""
	I1123 10:57:59.816454 1795697 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:57:59.826602 1795697 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 10:57:59.834879 1795697 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 10:57:59.834980 1795697 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 10:57:59.845020 1795697 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 10:57:59.845087 1795697 kubeadm.go:158] found existing configuration files:
	
	I1123 10:57:59.845166 1795697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 10:57:59.853715 1795697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 10:57:59.853818 1795697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 10:57:59.861340 1795697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 10:57:59.869832 1795697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 10:57:59.869944 1795697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 10:57:59.877916 1795697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 10:57:59.886562 1795697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 10:57:59.886673 1795697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 10:57:59.894614 1795697 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 10:57:59.903210 1795697 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 10:57:59.903318 1795697 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 10:57:59.911044 1795697 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 10:57:59.963278 1795697 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 10:57:59.963711 1795697 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 10:57:59.992315 1795697 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 10:57:59.992473 1795697 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 10:57:59.992515 1795697 kubeadm.go:319] OS: Linux
	I1123 10:57:59.992564 1795697 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 10:57:59.992617 1795697 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 10:57:59.992668 1795697 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 10:57:59.992720 1795697 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 10:57:59.992771 1795697 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 10:57:59.992835 1795697 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 10:57:59.992885 1795697 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 10:57:59.992936 1795697 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 10:57:59.992988 1795697 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 10:58:00.134130 1795697 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 10:58:00.134362 1795697 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 10:58:00.134520 1795697 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 10:58:00.152742 1795697 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 10:57:59.708655 1792569 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:58:00.710816 1792569 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:58:00.713800 1792569 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:58:00.719748 1792569 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:58:00.158453 1795697 out.go:252]   - Generating certificates and keys ...
	I1123 10:58:00.158651 1795697 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 10:58:00.160157 1795697 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 10:58:00.880001 1795697 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 10:58:01.849562 1795697 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 10:58:02.223520 1795697 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 10:58:00.723242 1792569 out.go:252]   - Booting up control plane ...
	I1123 10:58:00.723352 1792569 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:58:00.723430 1792569 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:58:00.724296 1792569 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:58:00.759784 1792569 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:58:00.759893 1792569 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:58:00.769910 1792569 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:58:00.770010 1792569 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:58:00.770050 1792569 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:58:00.938641 1792569 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:58:00.938762 1792569 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:58:01.938884 1792569 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001744759s
	I1123 10:58:01.942322 1792569 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:58:01.942652 1792569 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1123 10:58:01.942749 1792569 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:58:01.942829 1792569 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:58:02.999540 1795697 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 10:58:03.115533 1795697 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 10:58:03.115684 1795697 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-969029 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 10:58:03.623538 1795697 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 10:58:03.623675 1795697 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-969029 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 10:58:04.403543 1795697 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 10:58:04.815542 1795697 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 10:58:05.099515 1795697 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 10:58:05.099596 1795697 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 10:58:06.184788 1795697 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 10:58:06.663895 1795697 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 10:58:06.957961 1795697 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 10:58:07.647273 1795697 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 10:58:08.192343 1795697 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 10:58:08.193468 1795697 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 10:58:08.196403 1795697 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 10:58:07.726887 1792569 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 5.784042217s
	I1123 10:58:10.919434 1792569 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 8.977030135s
	I1123 10:58:11.444064 1792569 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 9.501488698s
	I1123 10:58:11.473606 1792569 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:58:11.501493 1792569 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:58:11.530344 1792569 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:58:11.530780 1792569 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-055571 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:58:11.542495 1792569 kubeadm.go:319] [bootstrap-token] Using token: 2awhk1.t6olsn12sy2o68lm
	I1123 10:58:08.199731 1795697 out.go:252]   - Booting up control plane ...
	I1123 10:58:08.199835 1795697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 10:58:08.199946 1795697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 10:58:08.200817 1795697 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 10:58:08.226839 1795697 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 10:58:08.226951 1795697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 10:58:08.238318 1795697 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 10:58:08.238442 1795697 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 10:58:08.238482 1795697 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 10:58:08.464903 1795697 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 10:58:08.465024 1795697 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 10:58:10.967594 1795697 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 2.500905325s
	I1123 10:58:10.969133 1795697 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 10:58:10.969227 1795697 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 10:58:10.969316 1795697 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 10:58:10.969395 1795697 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 10:58:11.545733 1792569 out.go:252]   - Configuring RBAC rules ...
	I1123 10:58:11.545856 1792569 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:58:11.555324 1792569 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:58:11.564503 1792569 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:58:11.569410 1792569 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:58:11.574428 1792569 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:58:11.592193 1792569 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:58:11.850759 1792569 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:58:12.361239 1792569 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:58:12.855836 1792569 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:58:12.857310 1792569 kubeadm.go:319] 
	I1123 10:58:12.857395 1792569 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:58:12.857404 1792569 kubeadm.go:319] 
	I1123 10:58:12.857482 1792569 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:58:12.857486 1792569 kubeadm.go:319] 
	I1123 10:58:12.857510 1792569 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:58:12.857955 1792569 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:58:12.858019 1792569 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:58:12.858024 1792569 kubeadm.go:319] 
	I1123 10:58:12.858078 1792569 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:58:12.858082 1792569 kubeadm.go:319] 
	I1123 10:58:12.858129 1792569 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:58:12.858133 1792569 kubeadm.go:319] 
	I1123 10:58:12.858185 1792569 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:58:12.858260 1792569 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:58:12.858328 1792569 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:58:12.858331 1792569 kubeadm.go:319] 
	I1123 10:58:12.858626 1792569 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:58:12.858750 1792569 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:58:12.858771 1792569 kubeadm.go:319] 
	I1123 10:58:12.859054 1792569 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 2awhk1.t6olsn12sy2o68lm \
	I1123 10:58:12.859269 1792569 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 \
	I1123 10:58:12.859496 1792569 kubeadm.go:319] 	--control-plane 
	I1123 10:58:12.859533 1792569 kubeadm.go:319] 
	I1123 10:58:12.859807 1792569 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:58:12.859843 1792569 kubeadm.go:319] 
	I1123 10:58:12.860121 1792569 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 2awhk1.t6olsn12sy2o68lm \
	I1123 10:58:12.860445 1792569 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 
	I1123 10:58:12.879795 1792569 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:58:12.880191 1792569 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:58:12.880316 1792569 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:58:12.880328 1792569 cni.go:84] Creating CNI manager for ""
	I1123 10:58:12.880335 1792569 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:58:12.883457 1792569 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:58:12.886352 1792569 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:58:12.897399 1792569 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:58:12.897424 1792569 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:58:12.937711 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:58:13.558689 1792569 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:58:13.558818 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:13.558908 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-055571 minikube.k8s.io/updated_at=2025_11_23T10_58_13_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=no-preload-055571 minikube.k8s.io/primary=true
	I1123 10:58:13.887832 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:13.887902 1792569 ops.go:34] apiserver oom_adj: -16
	I1123 10:58:14.388399 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:14.888402 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:15.388249 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:15.888335 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:16.388370 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:16.887885 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:17.387892 1792569 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:17.551562 1792569 kubeadm.go:1114] duration metric: took 3.992788154s to wait for elevateKubeSystemPrivileges
	I1123 10:58:17.551601 1792569 kubeadm.go:403] duration metric: took 24.630076642s to StartCluster
	I1123 10:58:17.551628 1792569 settings.go:142] acquiring lock: {Name:mk2ffa164862318fd53ac563f81d54c15c17157b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:58:17.551689 1792569 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:58:17.552359 1792569 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:58:17.552557 1792569 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:58:17.552644 1792569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:58:17.552871 1792569 config.go:182] Loaded profile config "no-preload-055571": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:58:17.552913 1792569 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:58:17.552973 1792569 addons.go:70] Setting storage-provisioner=true in profile "no-preload-055571"
	I1123 10:58:17.552987 1792569 addons.go:239] Setting addon storage-provisioner=true in "no-preload-055571"
	I1123 10:58:17.553017 1792569 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:58:17.553778 1792569 addons.go:70] Setting default-storageclass=true in profile "no-preload-055571"
	I1123 10:58:17.553803 1792569 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-055571"
	I1123 10:58:17.554047 1792569 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:58:17.554217 1792569 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:58:17.556255 1792569 out.go:179] * Verifying Kubernetes components...
	I1123 10:58:17.560465 1792569 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:58:17.586400 1792569 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:58:14.077262 1795697 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 3.10770246s
	I1123 10:58:16.784994 1795697 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 5.815747071s
	I1123 10:58:18.972269 1795697 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002967348s
	I1123 10:58:19.003844 1795697 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 10:58:19.027174 1795697 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 10:58:19.053834 1795697 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 10:58:19.054046 1795697 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-969029 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 10:58:19.069633 1795697 kubeadm.go:319] [bootstrap-token] Using token: kq6vm6.09lpm1jjzme9srb8
	I1123 10:58:17.591738 1792569 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:58:17.591762 1792569 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:58:17.591824 1792569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:58:17.598220 1792569 addons.go:239] Setting addon default-storageclass=true in "no-preload-055571"
	I1123 10:58:17.598257 1792569 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:58:17.598665 1792569 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:58:17.636275 1792569 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:58:17.636296 1792569 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:58:17.636360 1792569 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:58:17.647304 1792569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:58:17.674310 1792569 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35264 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:58:18.026058 1792569 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:58:18.032171 1792569 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:58:18.137552 1792569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:58:18.164955 1792569 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:58:19.190287 1792569 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.158078667s)
	I1123 10:58:19.190677 1792569 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.164575716s)
	I1123 10:58:19.190782 1792569 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 10:58:19.191747 1792569 node_ready.go:35] waiting up to 6m0s for node "no-preload-055571" to be "Ready" ...
	I1123 10:58:19.192094 1792569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.054515669s)
	I1123 10:58:19.696391 1792569 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-055571" context rescaled to 1 replicas
	I1123 10:58:19.754683 1792569 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.589692988s)
	I1123 10:58:19.757953 1792569 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 10:58:19.072843 1795697 out.go:252]   - Configuring RBAC rules ...
	I1123 10:58:19.072972 1795697 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 10:58:19.078435 1795697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 10:58:19.096054 1795697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 10:58:19.103935 1795697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 10:58:19.110233 1795697 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 10:58:19.114190 1795697 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 10:58:19.380116 1795697 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 10:58:19.853763 1795697 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 10:58:20.384200 1795697 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 10:58:20.384221 1795697 kubeadm.go:319] 
	I1123 10:58:20.384281 1795697 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 10:58:20.384292 1795697 kubeadm.go:319] 
	I1123 10:58:20.384369 1795697 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 10:58:20.384373 1795697 kubeadm.go:319] 
	I1123 10:58:20.384398 1795697 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 10:58:20.384457 1795697 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 10:58:20.384507 1795697 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 10:58:20.384511 1795697 kubeadm.go:319] 
	I1123 10:58:20.384565 1795697 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 10:58:20.384569 1795697 kubeadm.go:319] 
	I1123 10:58:20.384616 1795697 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 10:58:20.384620 1795697 kubeadm.go:319] 
	I1123 10:58:20.384671 1795697 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 10:58:20.384747 1795697 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 10:58:20.384815 1795697 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 10:58:20.384818 1795697 kubeadm.go:319] 
	I1123 10:58:20.384909 1795697 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 10:58:20.384986 1795697 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 10:58:20.384990 1795697 kubeadm.go:319] 
	I1123 10:58:20.385075 1795697 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token kq6vm6.09lpm1jjzme9srb8 \
	I1123 10:58:20.385187 1795697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 \
	I1123 10:58:20.385208 1795697 kubeadm.go:319] 	--control-plane 
	I1123 10:58:20.385212 1795697 kubeadm.go:319] 
	I1123 10:58:20.385297 1795697 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 10:58:20.385301 1795697 kubeadm.go:319] 
	I1123 10:58:20.385383 1795697 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token kq6vm6.09lpm1jjzme9srb8 \
	I1123 10:58:20.385485 1795697 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:89c61f9774debf2f88a0dc2c9b93b29185c1fae6b1036c7e525ca1a3f4568312 
	I1123 10:58:20.391080 1795697 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 10:58:20.391334 1795697 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 10:58:20.391439 1795697 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 10:58:20.391695 1795697 cni.go:84] Creating CNI manager for ""
	I1123 10:58:20.391747 1795697 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:58:20.397876 1795697 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 10:58:20.401083 1795697 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 10:58:20.408114 1795697 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 10:58:20.408133 1795697 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 10:58:20.451737 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 10:58:21.313782 1795697 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 10:58:21.313918 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:21.313996 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-969029 minikube.k8s.io/updated_at=2025_11_23T10_58_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53 minikube.k8s.io/name=embed-certs-969029 minikube.k8s.io/primary=true
	I1123 10:58:21.681382 1795697 ops.go:34] apiserver oom_adj: -16
	I1123 10:58:21.681502 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:22.181651 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:22.682511 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:19.761028 1792569 addons.go:530] duration metric: took 2.208104641s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1123 10:58:21.195305 1792569 node_ready.go:57] node "no-preload-055571" has "Ready":"False" status (will retry)
	W1123 10:58:23.694994 1792569 node_ready.go:57] node "no-preload-055571" has "Ready":"False" status (will retry)
	I1123 10:58:23.182431 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:23.681720 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:24.182384 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:24.681606 1795697 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 10:58:24.820728 1795697 kubeadm.go:1114] duration metric: took 3.506859118s to wait for elevateKubeSystemPrivileges
	I1123 10:58:24.820770 1795697 kubeadm.go:403] duration metric: took 25.037770717s to StartCluster
	I1123 10:58:24.820787 1795697 settings.go:142] acquiring lock: {Name:mk2ffa164862318fd53ac563f81d54c15c17157b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:58:24.820846 1795697 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:58:24.822156 1795697 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:58:24.822380 1795697 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:58:24.822482 1795697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 10:58:24.822708 1795697 config.go:182] Loaded profile config "embed-certs-969029": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:58:24.822748 1795697 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:58:24.822810 1795697 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-969029"
	I1123 10:58:24.822824 1795697 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-969029"
	I1123 10:58:24.822846 1795697 host.go:66] Checking if "embed-certs-969029" exists ...
	I1123 10:58:24.823508 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:58:24.823758 1795697 addons.go:70] Setting default-storageclass=true in profile "embed-certs-969029"
	I1123 10:58:24.823781 1795697 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-969029"
	I1123 10:58:24.824064 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:58:24.825696 1795697 out.go:179] * Verifying Kubernetes components...
	I1123 10:58:24.829148 1795697 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:58:24.858324 1795697 addons.go:239] Setting addon default-storageclass=true in "embed-certs-969029"
	I1123 10:58:24.858366 1795697 host.go:66] Checking if "embed-certs-969029" exists ...
	I1123 10:58:24.858783 1795697 cli_runner.go:164] Run: docker container inspect embed-certs-969029 --format={{.State.Status}}
	I1123 10:58:24.858810 1795697 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:58:24.861785 1795697 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:58:24.861814 1795697 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:58:24.861887 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:58:24.896275 1795697 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:58:24.896294 1795697 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:58:24.896359 1795697 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-969029
	I1123 10:58:24.902497 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:58:24.930606 1795697 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35269 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/embed-certs-969029/id_rsa Username:docker}
	I1123 10:58:25.116648 1795697 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 10:58:25.155865 1795697 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:58:25.222564 1795697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:58:25.227175 1795697 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:58:25.753018 1795697 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 10:58:25.754206 1795697 node_ready.go:35] waiting up to 6m0s for node "embed-certs-969029" to be "Ready" ...
	I1123 10:58:26.247838 1795697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.025232871s)
	I1123 10:58:26.247883 1795697 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.020662733s)
	I1123 10:58:26.261999 1795697 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-969029" context rescaled to 1 replicas
	I1123 10:58:26.266987 1795697 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 10:58:26.269804 1795697 addons.go:530] duration metric: took 1.447044393s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1123 10:58:27.757157 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:25.696695 1792569 node_ready.go:57] node "no-preload-055571" has "Ready":"False" status (will retry)
	W1123 10:58:28.194618 1792569 node_ready.go:57] node "no-preload-055571" has "Ready":"False" status (will retry)
	W1123 10:58:29.757766 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:32.257448 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:30.195010 1792569 node_ready.go:57] node "no-preload-055571" has "Ready":"False" status (will retry)
	I1123 10:58:32.694577 1792569 node_ready.go:49] node "no-preload-055571" is "Ready"
	I1123 10:58:32.694605 1792569 node_ready.go:38] duration metric: took 13.502835455s for node "no-preload-055571" to be "Ready" ...
	I1123 10:58:32.694633 1792569 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:58:32.694690 1792569 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:58:32.713722 1792569 api_server.go:72] duration metric: took 15.161117804s to wait for apiserver process to appear ...
	I1123 10:58:32.713750 1792569 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:58:32.713768 1792569 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:58:32.722152 1792569 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 10:58:32.723232 1792569 api_server.go:141] control plane version: v1.34.1
	I1123 10:58:32.723259 1792569 api_server.go:131] duration metric: took 9.501898ms to wait for apiserver health ...
	I1123 10:58:32.723269 1792569 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:58:32.726622 1792569 system_pods.go:59] 8 kube-system pods found
	I1123 10:58:32.726687 1792569 system_pods.go:61] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:58:32.726695 1792569 system_pods.go:61] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:32.726701 1792569 system_pods.go:61] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:32.726718 1792569 system_pods.go:61] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:32.726723 1792569 system_pods.go:61] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:32.726734 1792569 system_pods.go:61] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:32.726738 1792569 system_pods.go:61] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:32.726743 1792569 system_pods.go:61] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:58:32.726753 1792569 system_pods.go:74] duration metric: took 3.479063ms to wait for pod list to return data ...
	I1123 10:58:32.726761 1792569 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:58:32.730173 1792569 default_sa.go:45] found service account: "default"
	I1123 10:58:32.730200 1792569 default_sa.go:55] duration metric: took 3.432581ms for default service account to be created ...
	I1123 10:58:32.730211 1792569 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:58:32.733322 1792569 system_pods.go:86] 8 kube-system pods found
	I1123 10:58:32.733355 1792569 system_pods.go:89] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:58:32.733361 1792569 system_pods.go:89] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:32.733367 1792569 system_pods.go:89] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:32.733398 1792569 system_pods.go:89] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:32.733409 1792569 system_pods.go:89] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:32.733420 1792569 system_pods.go:89] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:32.733428 1792569 system_pods.go:89] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:32.733434 1792569 system_pods.go:89] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:58:32.733467 1792569 retry.go:31] will retry after 304.408436ms: missing components: kube-dns
	I1123 10:58:33.043545 1792569 system_pods.go:86] 8 kube-system pods found
	I1123 10:58:33.043587 1792569 system_pods.go:89] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:58:33.043594 1792569 system_pods.go:89] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:33.043601 1792569 system_pods.go:89] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:33.043624 1792569 system_pods.go:89] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:33.043645 1792569 system_pods.go:89] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:33.043650 1792569 system_pods.go:89] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:33.043654 1792569 system_pods.go:89] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:33.043660 1792569 system_pods.go:89] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:58:33.043680 1792569 retry.go:31] will retry after 243.372863ms: missing components: kube-dns
	I1123 10:58:33.292875 1792569 system_pods.go:86] 8 kube-system pods found
	I1123 10:58:33.292917 1792569 system_pods.go:89] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:58:33.292924 1792569 system_pods.go:89] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:33.292932 1792569 system_pods.go:89] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:33.292936 1792569 system_pods.go:89] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:33.292941 1792569 system_pods.go:89] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:33.292945 1792569 system_pods.go:89] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:33.292951 1792569 system_pods.go:89] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:33.292962 1792569 system_pods.go:89] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:58:33.292980 1792569 retry.go:31] will retry after 393.510988ms: missing components: kube-dns
	I1123 10:58:33.690917 1792569 system_pods.go:86] 8 kube-system pods found
	I1123 10:58:33.690951 1792569 system_pods.go:89] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:58:33.690957 1792569 system_pods.go:89] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:33.690975 1792569 system_pods.go:89] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:33.690980 1792569 system_pods.go:89] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:33.690991 1792569 system_pods.go:89] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:33.690995 1792569 system_pods.go:89] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:33.691002 1792569 system_pods.go:89] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:33.691008 1792569 system_pods.go:89] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:58:33.691028 1792569 retry.go:31] will retry after 395.162605ms: missing components: kube-dns
	I1123 10:58:34.090690 1792569 system_pods.go:86] 8 kube-system pods found
	I1123 10:58:34.090724 1792569 system_pods.go:89] "coredns-66bc5c9577-b9hss" [dc7b7825-8cc7-46c1-97fa-1be6181d2214] Running
	I1123 10:58:34.090732 1792569 system_pods.go:89] "etcd-no-preload-055571" [5f2196dc-af5b-461c-af30-45f87505c443] Running
	I1123 10:58:34.090736 1792569 system_pods.go:89] "kindnet-4gsp7" [004a7b4a-a9c1-47c9-bf13-e04773eb1112] Running
	I1123 10:58:34.090741 1792569 system_pods.go:89] "kube-apiserver-no-preload-055571" [d9426032-1f88-456b-97ef-48c88ddd62bf] Running
	I1123 10:58:34.090745 1792569 system_pods.go:89] "kube-controller-manager-no-preload-055571" [6ffed5aa-9b87-45b5-b442-c674945b9e34] Running
	I1123 10:58:34.090774 1792569 system_pods.go:89] "kube-proxy-6fnf4" [2685bee3-d65c-4c1a-854d-2980a0e2bced] Running
	I1123 10:58:34.090791 1792569 system_pods.go:89] "kube-scheduler-no-preload-055571" [414f7ec8-ab18-4848-b16c-36564946a57c] Running
	I1123 10:58:34.090796 1792569 system_pods.go:89] "storage-provisioner" [38d8473b-9b2a-451c-bc60-96e2e7cd2a7a] Running
	I1123 10:58:34.090805 1792569 system_pods.go:126] duration metric: took 1.360587353s to wait for k8s-apps to be running ...
	I1123 10:58:34.090819 1792569 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:58:34.090888 1792569 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:58:34.105966 1792569 system_svc.go:56] duration metric: took 15.138003ms WaitForService to wait for kubelet
	I1123 10:58:34.106049 1792569 kubeadm.go:587] duration metric: took 16.553460815s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:58:34.106087 1792569 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:58:34.108802 1792569 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:58:34.108868 1792569 node_conditions.go:123] node cpu capacity is 2
	I1123 10:58:34.108888 1792569 node_conditions.go:105] duration metric: took 2.794336ms to run NodePressure ...
	I1123 10:58:34.108911 1792569 start.go:242] waiting for startup goroutines ...
	I1123 10:58:34.108920 1792569 start.go:247] waiting for cluster config update ...
	I1123 10:58:34.108934 1792569 start.go:256] writing updated cluster config ...
	I1123 10:58:34.109230 1792569 ssh_runner.go:195] Run: rm -f paused
	I1123 10:58:34.114678 1792569 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:58:34.118912 1792569 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b9hss" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.124043 1792569 pod_ready.go:94] pod "coredns-66bc5c9577-b9hss" is "Ready"
	I1123 10:58:34.124068 1792569 pod_ready.go:86] duration metric: took 5.132092ms for pod "coredns-66bc5c9577-b9hss" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.126325 1792569 pod_ready.go:83] waiting for pod "etcd-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.130964 1792569 pod_ready.go:94] pod "etcd-no-preload-055571" is "Ready"
	I1123 10:58:34.130991 1792569 pod_ready.go:86] duration metric: took 4.642841ms for pod "etcd-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.133398 1792569 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.137785 1792569 pod_ready.go:94] pod "kube-apiserver-no-preload-055571" is "Ready"
	I1123 10:58:34.137813 1792569 pod_ready.go:86] duration metric: took 4.391729ms for pod "kube-apiserver-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.139953 1792569 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.519757 1792569 pod_ready.go:94] pod "kube-controller-manager-no-preload-055571" is "Ready"
	I1123 10:58:34.519785 1792569 pod_ready.go:86] duration metric: took 379.805212ms for pod "kube-controller-manager-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:34.719622 1792569 pod_ready.go:83] waiting for pod "kube-proxy-6fnf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:35.119531 1792569 pod_ready.go:94] pod "kube-proxy-6fnf4" is "Ready"
	I1123 10:58:35.119562 1792569 pod_ready.go:86] duration metric: took 399.913949ms for pod "kube-proxy-6fnf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:35.320206 1792569 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:35.719247 1792569 pod_ready.go:94] pod "kube-scheduler-no-preload-055571" is "Ready"
	I1123 10:58:35.719276 1792569 pod_ready.go:86] duration metric: took 399.042609ms for pod "kube-scheduler-no-preload-055571" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:58:35.719290 1792569 pod_ready.go:40] duration metric: took 1.604573715s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:58:35.781193 1792569 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:58:35.784238 1792569 out.go:179] * Done! kubectl is now configured to use "no-preload-055571" cluster and "default" namespace by default
	W1123 10:58:34.257523 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:36.757002 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:38.757330 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:41.257031 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:43.257189 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:45.260609 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	W1123 10:58:47.756996 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	d3933774c00d4       1611cd07b61d5       9 seconds ago       Running             busybox                   0                   3b993fd84303c       busybox                                     default
	141dfe2fe2c0f       138784d87c9c5       15 seconds ago      Running             coredns                   0                   d974ac78f94e9       coredns-66bc5c9577-b9hss                    kube-system
	f6ff857443149       66749159455b3       15 seconds ago      Running             storage-provisioner       0                   d1f48d3c6cebe       storage-provisioner                         kube-system
	9d45eab165f42       b1a8c6f707935       26 seconds ago      Running             kindnet-cni               0                   b17e3b0d4e95c       kindnet-4gsp7                               kube-system
	8b471e7e9bbda       05baa95f5142d       29 seconds ago      Running             kube-proxy                0                   f63480f595d4c       kube-proxy-6fnf4                            kube-system
	2f827144cf7fa       b5f57ec6b9867       45 seconds ago      Running             kube-scheduler            0                   d31fa2bd01cdf       kube-scheduler-no-preload-055571            kube-system
	14b800b67ad60       43911e833d64d       45 seconds ago      Running             kube-apiserver            0                   bdeb99787d352       kube-apiserver-no-preload-055571            kube-system
	6249f178fb08f       a1894772a478e       45 seconds ago      Running             etcd                      0                   91ccf8efa3085       etcd-no-preload-055571                      kube-system
	eab30623258b2       7eb2c6ff0c5a7       46 seconds ago      Running             kube-controller-manager   0                   768404e862924       kube-controller-manager-no-preload-055571   kube-system
	
	
	==> containerd <==
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.763682305Z" level=info msg="connecting to shim f6ff8574431495f4a49d9c3759b8049dfc4450cdb014fcd3928c598ca2c0da52" address="unix:///run/containerd/s/d08a8c552fa361ee5ea50b7dd1664ba292c4bdff815e226fa159fee0b232e032" protocol=ttrpc version=3
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.795830349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-b9hss,Uid:dc7b7825-8cc7-46c1-97fa-1be6181d2214,Namespace:kube-system,Attempt:0,} returns sandbox id \"d974ac78f94e9d193e6905d8824355b1ec638405eb0cda8e5e8ce71da22f74c3\""
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.809430740Z" level=info msg="CreateContainer within sandbox \"d974ac78f94e9d193e6905d8824355b1ec638405eb0cda8e5e8ce71da22f74c3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.823466945Z" level=info msg="Container 141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.836032202Z" level=info msg="CreateContainer within sandbox \"d974ac78f94e9d193e6905d8824355b1ec638405eb0cda8e5e8ce71da22f74c3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611\""
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.839402426Z" level=info msg="StartContainer for \"141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611\""
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.843417615Z" level=info msg="connecting to shim 141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611" address="unix:///run/containerd/s/fc599243d01e9e25cb12964bc3826e733bbfdbc246e95d7714a26ac91a1c2a90" protocol=ttrpc version=3
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.851437391Z" level=info msg="StartContainer for \"f6ff8574431495f4a49d9c3759b8049dfc4450cdb014fcd3928c598ca2c0da52\" returns successfully"
	Nov 23 10:58:32 no-preload-055571 containerd[756]: time="2025-11-23T10:58:32.961499446Z" level=info msg="StartContainer for \"141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611\" returns successfully"
	Nov 23 10:58:36 no-preload-055571 containerd[756]: time="2025-11-23T10:58:36.303485087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bd8008cd-cc28-45d9-8fa2-06099a099993,Namespace:default,Attempt:0,}"
	Nov 23 10:58:36 no-preload-055571 containerd[756]: time="2025-11-23T10:58:36.357106970Z" level=info msg="connecting to shim 3b993fd84303cb56d6227974a0d1ce802d8c685ce52d96d0a12fbf0599769fdb" address="unix:///run/containerd/s/da8fd57c9ae84815f9922fc211e36017ce1a9753536b85e1b44e9b080aee848c" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 10:58:36 no-preload-055571 containerd[756]: time="2025-11-23T10:58:36.425254202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:bd8008cd-cc28-45d9-8fa2-06099a099993,Namespace:default,Attempt:0,} returns sandbox id \"3b993fd84303cb56d6227974a0d1ce802d8c685ce52d96d0a12fbf0599769fdb\""
	Nov 23 10:58:36 no-preload-055571 containerd[756]: time="2025-11-23T10:58:36.429014143Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.683882941Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.686328912Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937187"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.688777049Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.692204437Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.692809000Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.263750772s"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.692921671Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.700835176Z" level=info msg="CreateContainer within sandbox \"3b993fd84303cb56d6227974a0d1ce802d8c685ce52d96d0a12fbf0599769fdb\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.716991058Z" level=info msg="Container d3933774c00d41531266df66c608b81a1fd0b86b6e3ac5a971b58edeb0313342: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.737815806Z" level=info msg="CreateContainer within sandbox \"3b993fd84303cb56d6227974a0d1ce802d8c685ce52d96d0a12fbf0599769fdb\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"d3933774c00d41531266df66c608b81a1fd0b86b6e3ac5a971b58edeb0313342\""
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.739010575Z" level=info msg="StartContainer for \"d3933774c00d41531266df66c608b81a1fd0b86b6e3ac5a971b58edeb0313342\""
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.740311302Z" level=info msg="connecting to shim d3933774c00d41531266df66c608b81a1fd0b86b6e3ac5a971b58edeb0313342" address="unix:///run/containerd/s/da8fd57c9ae84815f9922fc211e36017ce1a9753536b85e1b44e9b080aee848c" protocol=ttrpc version=3
	Nov 23 10:58:38 no-preload-055571 containerd[756]: time="2025-11-23T10:58:38.813544112Z" level=info msg="StartContainer for \"d3933774c00d41531266df66c608b81a1fd0b86b6e3ac5a971b58edeb0313342\" returns successfully"
	
	
	==> coredns [141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58191 - 5103 "HINFO IN 9040134774686138549.247589159770230753. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.023388887s
	
	
	==> describe nodes <==
	Name:               no-preload-055571
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-055571
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=no-preload-055571
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_58_13_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:58:09 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-055571
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:58:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:58:43 +0000   Sun, 23 Nov 2025 10:58:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:58:43 +0000   Sun, 23 Nov 2025 10:58:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:58:43 +0000   Sun, 23 Nov 2025 10:58:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:58:43 +0000   Sun, 23 Nov 2025 10:58:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-055571
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                6bebf923-fe25-46fc-b159-ca4a7a3f5ae9
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-b9hss                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-055571                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         36s
	  kube-system                 kindnet-4gsp7                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-055571             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-055571    200m (10%)    0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-6fnf4                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-055571             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   NodeHasSufficientMemory  47s (x8 over 47s)  kubelet          Node no-preload-055571 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    47s (x8 over 47s)  kubelet          Node no-preload-055571 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     47s (x7 over 47s)  kubelet          Node no-preload-055571 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  47s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  36s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-055571 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-055571 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-055571 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-055571 event: Registered Node no-preload-055571 in Controller
	  Normal   NodeReady                16s                kubelet          Node no-preload-055571 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:50] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [6249f178fb08fff7a76e05ef2091e7236bff165ee849beeba741138fd5d4e5d1] <==
	{"level":"warn","ts":"2025-11-23T10:58:06.412136Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.446019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.515630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.551601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.590270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.671980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.759457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.813858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.846638Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.887373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.908422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.967467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:06.993846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.037245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.133335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.167366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.241979Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57888","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.262336Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.330494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57924","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.360098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.403398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.491395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.539476Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.584339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:07.772051Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58068","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:58:48 up 11:41,  0 user,  load average: 3.75, 3.27, 2.90
	Linux no-preload-055571 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9d45eab165f426941b46cacf4c992c6d8d994ff8d83232faff07678871d4234f] <==
	I1123 10:58:21.928686       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:58:21.928949       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 10:58:21.929080       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:58:21.929097       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:58:21.929119       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:58:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:58:22.132779       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:58:22.132807       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:58:22.132823       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:58:22.134102       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 10:58:22.333711       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:58:22.333738       1 metrics.go:72] Registering metrics
	I1123 10:58:22.333795       1 controller.go:711] "Syncing nftables rules"
	I1123 10:58:32.140501       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:58:32.140540       1 main.go:301] handling current node
	I1123 10:58:42.131803       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 10:58:42.131846       1 main.go:301] handling current node
	
	
	==> kube-apiserver [14b800b67ad6052023ad76ace7ece6ce928c08d72e9876a0ba4ec63aa2fd2940] <==
	E1123 10:58:09.414765       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1123 10:58:09.417138       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:09.417350       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:58:09.441561       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:09.441873       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:58:09.475298       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:58:09.638531       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:58:09.768777       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:58:09.796733       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:58:09.797025       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:58:11.240756       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:58:11.305866       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:58:11.474151       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:58:11.502424       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 10:58:11.504381       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:58:11.517871       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:58:12.080599       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:58:12.318687       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:58:12.359305       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:58:12.385093       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:58:17.444895       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:17.456167       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:17.957374       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:58:18.012353       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 10:58:45.288084       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:35866: use of closed network connection
	
	
	==> kube-controller-manager [eab30623258b276d71d20e0094aa488fe2eaf689d062eb457557742f0cf5e8dd] <==
	I1123 10:58:17.170326       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:58:17.170368       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:58:17.171025       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 10:58:17.171628       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 10:58:17.171811       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:58:17.171992       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-055571"
	I1123 10:58:17.172131       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 10:58:17.172928       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:58:17.178140       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 10:58:17.186933       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:58:17.186966       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 10:58:17.186974       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 10:58:17.188457       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 10:58:17.188587       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 10:58:17.190102       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 10:58:17.202115       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:58:17.215101       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:58:17.219848       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 10:58:17.220316       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 10:58:17.221415       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:58:17.223113       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:58:17.223130       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 10:58:17.224038       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 10:58:17.224877       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 10:58:37.175924       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [8b471e7e9bbda9cbfbea76934750632ac310334af415b16e44073b2e576eabc9] <==
	I1123 10:58:19.534534       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:58:19.678570       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:58:19.794940       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:58:19.794991       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 10:58:19.795065       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:58:19.879901       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:58:19.879971       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:58:19.896507       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:58:19.896947       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:58:19.896976       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:58:19.899125       1 config.go:200] "Starting service config controller"
	I1123 10:58:19.899136       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:58:19.899152       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:58:19.899156       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:58:19.899298       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:58:19.899307       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:58:19.904100       1 config.go:309] "Starting node config controller"
	I1123 10:58:19.904115       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:58:19.904122       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:58:19.999270       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 10:58:19.999345       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:58:19.999632       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [2f827144cf7fac652ccb74aef0066e57b21ecef01a8dcb73809e96022b694400] <==
	I1123 10:58:07.003458       1 serving.go:386] Generated self-signed cert in-memory
	W1123 10:58:10.876326       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 10:58:10.876515       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 10:58:10.876562       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 10:58:10.876596       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 10:58:10.908195       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 10:58:10.908461       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:58:10.911490       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 10:58:10.911586       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:58:10.911822       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 10:58:10.911639       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 10:58:10.920649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 10:58:12.315252       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:58:13 no-preload-055571 kubelet[2105]: E1123 10:58:13.756351    2105 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-055571\" already exists" pod="kube-system/kube-apiserver-no-preload-055571"
	Nov 23 10:58:13 no-preload-055571 kubelet[2105]: I1123 10:58:13.781341    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-055571" podStartSLOduration=1.781323142 podStartE2EDuration="1.781323142s" podCreationTimestamp="2025-11-23 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:13.764743613 +0000 UTC m=+1.526547442" watchObservedRunningTime="2025-11-23 10:58:13.781323142 +0000 UTC m=+1.543126947"
	Nov 23 10:58:13 no-preload-055571 kubelet[2105]: I1123 10:58:13.796728    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-055571" podStartSLOduration=1.79670892 podStartE2EDuration="1.79670892s" podCreationTimestamp="2025-11-23 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:13.782014383 +0000 UTC m=+1.543818180" watchObservedRunningTime="2025-11-23 10:58:13.79670892 +0000 UTC m=+1.558512725"
	Nov 23 10:58:13 no-preload-055571 kubelet[2105]: I1123 10:58:13.825998    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-055571" podStartSLOduration=1.8259800240000001 podStartE2EDuration="1.825980024s" podCreationTimestamp="2025-11-23 10:58:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:13.810504033 +0000 UTC m=+1.572307871" watchObservedRunningTime="2025-11-23 10:58:13.825980024 +0000 UTC m=+1.587783821"
	Nov 23 10:58:17 no-preload-055571 kubelet[2105]: I1123 10:58:17.134804    2105 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 10:58:17 no-preload-055571 kubelet[2105]: I1123 10:58:17.136842    2105 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.235373    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/004a7b4a-a9c1-47c9-bf13-e04773eb1112-lib-modules\") pod \"kindnet-4gsp7\" (UID: \"004a7b4a-a9c1-47c9-bf13-e04773eb1112\") " pod="kube-system/kindnet-4gsp7"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.235425    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/004a7b4a-a9c1-47c9-bf13-e04773eb1112-cni-cfg\") pod \"kindnet-4gsp7\" (UID: \"004a7b4a-a9c1-47c9-bf13-e04773eb1112\") " pod="kube-system/kindnet-4gsp7"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.235444    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/004a7b4a-a9c1-47c9-bf13-e04773eb1112-xtables-lock\") pod \"kindnet-4gsp7\" (UID: \"004a7b4a-a9c1-47c9-bf13-e04773eb1112\") " pod="kube-system/kindnet-4gsp7"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.235466    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgmx2\" (UniqueName: \"kubernetes.io/projected/004a7b4a-a9c1-47c9-bf13-e04773eb1112-kube-api-access-wgmx2\") pod \"kindnet-4gsp7\" (UID: \"004a7b4a-a9c1-47c9-bf13-e04773eb1112\") " pod="kube-system/kindnet-4gsp7"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.342299    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2685bee3-d65c-4c1a-854d-2980a0e2bced-kube-proxy\") pod \"kube-proxy-6fnf4\" (UID: \"2685bee3-d65c-4c1a-854d-2980a0e2bced\") " pod="kube-system/kube-proxy-6fnf4"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.342473    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2685bee3-d65c-4c1a-854d-2980a0e2bced-lib-modules\") pod \"kube-proxy-6fnf4\" (UID: \"2685bee3-d65c-4c1a-854d-2980a0e2bced\") " pod="kube-system/kube-proxy-6fnf4"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.342505    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dq6w\" (UniqueName: \"kubernetes.io/projected/2685bee3-d65c-4c1a-854d-2980a0e2bced-kube-api-access-5dq6w\") pod \"kube-proxy-6fnf4\" (UID: \"2685bee3-d65c-4c1a-854d-2980a0e2bced\") " pod="kube-system/kube-proxy-6fnf4"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.342529    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2685bee3-d65c-4c1a-854d-2980a0e2bced-xtables-lock\") pod \"kube-proxy-6fnf4\" (UID: \"2685bee3-d65c-4c1a-854d-2980a0e2bced\") " pod="kube-system/kube-proxy-6fnf4"
	Nov 23 10:58:18 no-preload-055571 kubelet[2105]: I1123 10:58:18.428319    2105 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 10:58:21 no-preload-055571 kubelet[2105]: I1123 10:58:21.809341    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-4gsp7" podStartSLOduration=1.011567775 podStartE2EDuration="3.809315334s" podCreationTimestamp="2025-11-23 10:58:18 +0000 UTC" firstStartedPulling="2025-11-23 10:58:18.797568119 +0000 UTC m=+6.559371916" lastFinishedPulling="2025-11-23 10:58:21.595315661 +0000 UTC m=+9.357119475" observedRunningTime="2025-11-23 10:58:21.80899837 +0000 UTC m=+9.570802167" watchObservedRunningTime="2025-11-23 10:58:21.809315334 +0000 UTC m=+9.571119131"
	Nov 23 10:58:21 no-preload-055571 kubelet[2105]: I1123 10:58:21.810153    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6fnf4" podStartSLOduration=3.81014043 podStartE2EDuration="3.81014043s" podCreationTimestamp="2025-11-23 10:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:19.790610375 +0000 UTC m=+7.552414213" watchObservedRunningTime="2025-11-23 10:58:21.81014043 +0000 UTC m=+9.571944235"
	Nov 23 10:58:32 no-preload-055571 kubelet[2105]: I1123 10:58:32.227923    2105 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 10:58:32 no-preload-055571 kubelet[2105]: I1123 10:58:32.379859    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gjc9\" (UniqueName: \"kubernetes.io/projected/dc7b7825-8cc7-46c1-97fa-1be6181d2214-kube-api-access-6gjc9\") pod \"coredns-66bc5c9577-b9hss\" (UID: \"dc7b7825-8cc7-46c1-97fa-1be6181d2214\") " pod="kube-system/coredns-66bc5c9577-b9hss"
	Nov 23 10:58:32 no-preload-055571 kubelet[2105]: I1123 10:58:32.380132    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/38d8473b-9b2a-451c-bc60-96e2e7cd2a7a-tmp\") pod \"storage-provisioner\" (UID: \"38d8473b-9b2a-451c-bc60-96e2e7cd2a7a\") " pod="kube-system/storage-provisioner"
	Nov 23 10:58:32 no-preload-055571 kubelet[2105]: I1123 10:58:32.380178    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7b7825-8cc7-46c1-97fa-1be6181d2214-config-volume\") pod \"coredns-66bc5c9577-b9hss\" (UID: \"dc7b7825-8cc7-46c1-97fa-1be6181d2214\") " pod="kube-system/coredns-66bc5c9577-b9hss"
	Nov 23 10:58:32 no-preload-055571 kubelet[2105]: I1123 10:58:32.380200    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l7zl\" (UniqueName: \"kubernetes.io/projected/38d8473b-9b2a-451c-bc60-96e2e7cd2a7a-kube-api-access-7l7zl\") pod \"storage-provisioner\" (UID: \"38d8473b-9b2a-451c-bc60-96e2e7cd2a7a\") " pod="kube-system/storage-provisioner"
	Nov 23 10:58:33 no-preload-055571 kubelet[2105]: I1123 10:58:33.845917    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b9hss" podStartSLOduration=15.845897702 podStartE2EDuration="15.845897702s" podCreationTimestamp="2025-11-23 10:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:33.833331378 +0000 UTC m=+21.595135183" watchObservedRunningTime="2025-11-23 10:58:33.845897702 +0000 UTC m=+21.607701499"
	Nov 23 10:58:33 no-preload-055571 kubelet[2105]: I1123 10:58:33.862719    2105 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.862698524 podStartE2EDuration="14.862698524s" podCreationTimestamp="2025-11-23 10:58:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:33.847427126 +0000 UTC m=+21.609230931" watchObservedRunningTime="2025-11-23 10:58:33.862698524 +0000 UTC m=+21.624502321"
	Nov 23 10:58:36 no-preload-055571 kubelet[2105]: I1123 10:58:36.104813    2105 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-965kh\" (UniqueName: \"kubernetes.io/projected/bd8008cd-cc28-45d9-8fa2-06099a099993-kube-api-access-965kh\") pod \"busybox\" (UID: \"bd8008cd-cc28-45d9-8fa2-06099a099993\") " pod="default/busybox"
	
	
	==> storage-provisioner [f6ff8574431495f4a49d9c3759b8049dfc4450cdb014fcd3928c598ca2c0da52] <==
	I1123 10:58:32.867460       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:58:32.869511       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:32.875505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:58:32.875647       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:58:32.876546       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-055571_82bd6546-ee7f-445c-b100-d2f0794b24b9!
	I1123 10:58:32.884834       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c452823d-f421-47b4-ba83-5334871b3f15", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-055571_82bd6546-ee7f-445c-b100-d2f0794b24b9 became leader
	W1123 10:58:32.887915       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:32.897272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:58:32.977519       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-055571_82bd6546-ee7f-445c-b100-d2f0794b24b9!
	W1123 10:58:34.900783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:34.905678       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:36.908985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:36.913795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:38.916703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:38.921454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:40.924462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:40.929100       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:42.933886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:42.940562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:44.943822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:44.948331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:46.954706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:46.963321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:48.966732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:58:48.972135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-055571 -n no-preload-055571
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-055571 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (13.80s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (14.62s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-969029 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [976d8660-27e9-4d64-bcea-5f2857bfbd4f] Pending
helpers_test.go:352: "busybox" [976d8660-27e9-4d64-bcea-5f2857bfbd4f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [976d8660-27e9-4d64-bcea-5f2857bfbd4f] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.003327837s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-969029 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
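The assertion above is the same check that fails in the other DeployApp tests: the busybox pod reports a soft open-files limit of 1024 instead of the 1048576 the test expects. A minimal stand-alone sketch of that check is below (a hypothetical `ulimitcheck` helper, not part of the minikube test suite; it assumes kubectl is on PATH and takes the profile's kube context name as its only argument, mirroring the `kubectl ... exec busybox -- /bin/sh -c "ulimit -n"` command shown in the log):

// ulimitcheck is a hypothetical stand-alone helper (not part of the minikube
// test suite) that repeats the failing assertion: exec `ulimit -n` inside the
// deployed busybox pod and compare it against the expected 1048576.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: ulimitcheck <kube-context>")
		os.Exit(2)
	}
	// Same command the test runs: kubectl --context <ctx> exec busybox -- /bin/sh -c "ulimit -n"
	out, err := exec.Command("kubectl", "--context", os.Args[1],
		"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "kubectl exec failed:", err)
		os.Exit(1)
	}
	got, err := strconv.Atoi(strings.TrimSpace(string(out)))
	if err != nil {
		fmt.Fprintf(os.Stderr, "unexpected output %q: %v\n", out, err)
		os.Exit(1)
	}
	const want = 1048576 // soft NOFILE limit the test expects inside the pod
	if got != want {
		fmt.Printf("'ulimit -n' returned %d, expected %d\n", got, want)
		os.Exit(1)
	}
	fmt.Println("ulimit -n OK:", got)
}

This only reproduces the symptom; the 1048576 value the test expects typically comes from the node's containerd/systemd LimitNOFILE configuration rather than from anything inside the pod spec.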
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-969029
helpers_test.go:243: (dbg) docker inspect embed-certs-969029:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de",
	        "Created": "2025-11-23T10:57:50.842484184Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1796233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:57:51.032485172Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de/hosts",
	        "LogPath": "/var/lib/docker/containers/d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de/d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de-json.log",
	        "Name": "/embed-certs-969029",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-969029:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-969029",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de",
	                "LowerDir": "/var/lib/docker/overlay2/97f5a9880020c43a7a82b7f70bd6dd89e6b4a203d995cd2567245240cbae9ffc-init/diff:/var/lib/docker/overlay2/fe0bef51c968206096993e9a75db2143cd9cd74d56696a257291ce63f851a2d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/97f5a9880020c43a7a82b7f70bd6dd89e6b4a203d995cd2567245240cbae9ffc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/97f5a9880020c43a7a82b7f70bd6dd89e6b4a203d995cd2567245240cbae9ffc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/97f5a9880020c43a7a82b7f70bd6dd89e6b4a203d995cd2567245240cbae9ffc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-969029",
	                "Source": "/var/lib/docker/volumes/embed-certs-969029/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-969029",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-969029",
	                "name.minikube.sigs.k8s.io": "embed-certs-969029",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50021cc80909e5d261e7d6437ae7441cc6b5b829f27cd62f1598ce5e3268821f",
	            "SandboxKey": "/var/run/docker/netns/50021cc80909",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35269"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35270"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35273"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35271"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35272"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-969029": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:5f:37:4f:d4:79",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6f58dbb248072889615a2f552ac4d5890af6f4b7a41194d40d66ae581236eb94",
	                    "EndpointID": "8dc7ab379bf3b18e6b137570b8eab51a23befb49b0d67d2ea9b90031e0f21ac5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-969029",
	                        "d3cb17036a8e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-969029 -n embed-certs-969029
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-969029 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-969029 logs -n 25: (1.391481157s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-679101 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-679101   │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ force-systemd-env-479166 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-479166 │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p force-systemd-env-479166                                                                                                                                                                                                                         │ force-systemd-env-479166 │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p cert-options-501705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-501705      │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ cert-options-501705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-501705      │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ -p cert-options-501705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-501705      │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p cert-options-501705                                                                                                                                                                                                                              │ cert-options-501705      │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:55 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:55 UTC │ 23 Nov 25 10:55 UTC │
	│ stop    │ -p old-k8s-version-162750 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:55 UTC │ 23 Nov 25 10:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-162750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:56 UTC │ 23 Nov 25 10:56 UTC │
	│ start   │ -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:56 UTC │ 23 Nov 25 10:57 UTC │
	│ image   │ old-k8s-version-162750 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ pause   │ -p old-k8s-version-162750 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ unpause │ -p old-k8s-version-162750 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p old-k8s-version-162750                                                                                                                                                                                                                           │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p old-k8s-version-162750                                                                                                                                                                                                                           │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ start   │ -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-055571        │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:58 UTC │
	│ start   │ -p cert-expiration-679101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-679101   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p cert-expiration-679101                                                                                                                                                                                                                           │ cert-expiration-679101   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ start   │ -p embed-certs-969029 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-969029       │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-055571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-055571        │ jenkins │ v1.37.0 │ 23 Nov 25 10:58 UTC │ 23 Nov 25 10:58 UTC │
	│ stop    │ -p no-preload-055571 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-055571        │ jenkins │ v1.37.0 │ 23 Nov 25 10:58 UTC │ 23 Nov 25 10:59 UTC │
	│ addons  │ enable dashboard -p no-preload-055571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-055571        │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ start   │ -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-055571        │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:59:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:59:03.121926 1801378 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:59:03.122055 1801378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:59:03.122066 1801378 out.go:374] Setting ErrFile to fd 2...
	I1123 10:59:03.122071 1801378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:59:03.122312 1801378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:59:03.122661 1801378 out.go:368] Setting JSON to false
	I1123 10:59:03.123644 1801378 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":42088,"bootTime":1763853455,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:59:03.123715 1801378 start.go:143] virtualization:  
	I1123 10:59:03.129025 1801378 out.go:179] * [no-preload-055571] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:59:03.132201 1801378 notify.go:221] Checking for updates...
	I1123 10:59:03.132689 1801378 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:59:03.135751 1801378 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:59:03.138825 1801378 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:59:03.141763 1801378 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:59:03.144829 1801378 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:59:03.147794 1801378 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:59:03.151248 1801378 config.go:182] Loaded profile config "no-preload-055571": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:59:03.151861 1801378 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:59:03.196210 1801378 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:59:03.196323 1801378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:59:03.246324 1801378 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:59:03.237313044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:59:03.246430 1801378 docker.go:319] overlay module found
	I1123 10:59:03.249572 1801378 out.go:179] * Using the docker driver based on existing profile
	I1123 10:59:03.252349 1801378 start.go:309] selected driver: docker
	I1123 10:59:03.252368 1801378 start.go:927] validating driver "docker" against &{Name:no-preload-055571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:59:03.252471 1801378 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:59:03.253163 1801378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:59:03.315892 1801378 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:59:03.307405315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:59:03.316266 1801378 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:59:03.316299 1801378 cni.go:84] Creating CNI manager for ""
	I1123 10:59:03.316354 1801378 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:59:03.316396 1801378 start.go:353] cluster config:
	{Name:no-preload-055571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:59:03.319453 1801378 out.go:179] * Starting "no-preload-055571" primary control-plane node in "no-preload-055571" cluster
	I1123 10:59:03.322189 1801378 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 10:59:03.324980 1801378 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:59:03.327707 1801378 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 10:59:03.327781 1801378 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:59:03.327849 1801378 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/config.json ...
	I1123 10:59:03.328125 1801378 cache.go:107] acquiring lock: {Name:mka2b8a0dc7618a4186b53c7030334ba2743726d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328201 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 10:59:03.328212 1801378 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.31µs
	I1123 10:59:03.328224 1801378 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 10:59:03.328237 1801378 cache.go:107] acquiring lock: {Name:mked64c05cbe29983e52beb0ede9990c8342d2c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328269 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 10:59:03.328283 1801378 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 47.892µs
	I1123 10:59:03.328290 1801378 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 10:59:03.328300 1801378 cache.go:107] acquiring lock: {Name:mkf03376969352d5c6ab837edabe530dd846c7fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328331 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 10:59:03.328336 1801378 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 37.644µs
	I1123 10:59:03.328342 1801378 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 10:59:03.328355 1801378 cache.go:107] acquiring lock: {Name:mke7eca088924ba5278c3e6f5a538cf5ffb363c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328380 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 10:59:03.328385 1801378 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.908µs
	I1123 10:59:03.328391 1801378 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 10:59:03.328399 1801378 cache.go:107] acquiring lock: {Name:mk02bc90fcdcdd13d6b6d27fb406e2594c2c7586 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328425 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 10:59:03.328430 1801378 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 31.917µs
	I1123 10:59:03.328436 1801378 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 10:59:03.328418 1801378 cache.go:107] acquiring lock: {Name:mkf752427a5200a1894b6bd7cc1653f9055ce950 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328444 1801378 cache.go:107] acquiring lock: {Name:mk0ec107985b2e86560d8614a0f0ddd0531b9cb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328466 1801378 cache.go:107] acquiring lock: {Name:mk09ce98fa221f42abb869ee72c86d69dd16d276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328493 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 10:59:03.328497 1801378 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.508µs
	I1123 10:59:03.328504 1801378 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 10:59:03.328499 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 10:59:03.328513 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 10:59:03.328522 1801378 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 110.603µs
	I1123 10:59:03.328528 1801378 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 10:59:03.328513 1801378 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 70.029µs
	I1123 10:59:03.328535 1801378 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 10:59:03.328541 1801378 cache.go:87] Successfully saved all images to host disk.
	I1123 10:59:03.346738 1801378 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:59:03.346761 1801378 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:59:03.346777 1801378 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:59:03.346807 1801378 start.go:360] acquireMachinesLock for no-preload-055571: {Name:mk3ea9b9eaa721e5203c43f3e725422cc94c48a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.346865 1801378 start.go:364] duration metric: took 37.316µs to acquireMachinesLock for "no-preload-055571"
	I1123 10:59:03.346891 1801378 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:59:03.346902 1801378 fix.go:54] fixHost starting: 
	I1123 10:59:03.347161 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:03.364311 1801378 fix.go:112] recreateIfNeeded on no-preload-055571: state=Stopped err=<nil>
	W1123 10:59:03.364360 1801378 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 10:59:04.257574 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	I1123 10:59:06.259468 1795697 node_ready.go:49] node "embed-certs-969029" is "Ready"
	I1123 10:59:06.259502 1795697 node_ready.go:38] duration metric: took 40.505264475s for node "embed-certs-969029" to be "Ready" ...
	I1123 10:59:06.259520 1795697 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:59:06.259580 1795697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:59:06.279807 1795697 api_server.go:72] duration metric: took 41.457383999s to wait for apiserver process to appear ...
	I1123 10:59:06.279836 1795697 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:59:06.279856 1795697 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:59:06.288082 1795697 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:59:06.289059 1795697 api_server.go:141] control plane version: v1.34.1
	I1123 10:59:06.289084 1795697 api_server.go:131] duration metric: took 9.241619ms to wait for apiserver health ...
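	The healthz probe above hits the apiserver's health endpoint directly on the node IP. Reproducing it by hand would look roughly like this (URL taken from the log; -k skips TLS verification, and anonymous access to /healthz is evidently allowed here since the probe returned 200):

	    # Manual equivalent of the apiserver health check minikube just performed.
	    curl -sk https://192.168.76.2:8443/healthz   # expect the body "ok" with HTTP 200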
	I1123 10:59:06.289093 1795697 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:59:06.295051 1795697 system_pods.go:59] 8 kube-system pods found
	I1123 10:59:06.295086 1795697 system_pods.go:61] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Pending
	I1123 10:59:06.295093 1795697 system_pods.go:61] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:06.295098 1795697 system_pods.go:61] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:06.295102 1795697 system_pods.go:61] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:06.295106 1795697 system_pods.go:61] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:06.295110 1795697 system_pods.go:61] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:06.295114 1795697 system_pods.go:61] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:06.295118 1795697 system_pods.go:61] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Pending
	I1123 10:59:06.295124 1795697 system_pods.go:74] duration metric: took 6.024698ms to wait for pod list to return data ...
	I1123 10:59:06.295131 1795697 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:59:06.297744 1795697 default_sa.go:45] found service account: "default"
	I1123 10:59:06.297771 1795697 default_sa.go:55] duration metric: took 2.633921ms for default service account to be created ...
	I1123 10:59:06.297781 1795697 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:59:06.300506 1795697 system_pods.go:86] 8 kube-system pods found
	I1123 10:59:06.300532 1795697 system_pods.go:89] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Pending
	I1123 10:59:06.300538 1795697 system_pods.go:89] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:06.300552 1795697 system_pods.go:89] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:06.300558 1795697 system_pods.go:89] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:06.300563 1795697 system_pods.go:89] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:06.300567 1795697 system_pods.go:89] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:06.300571 1795697 system_pods.go:89] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:06.300575 1795697 system_pods.go:89] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Pending
	I1123 10:59:06.300602 1795697 retry.go:31] will retry after 251.978529ms: missing components: kube-dns
	I1123 10:59:06.556277 1795697 system_pods.go:86] 8 kube-system pods found
	I1123 10:59:06.556314 1795697 system_pods.go:89] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:59:06.556320 1795697 system_pods.go:89] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:06.556327 1795697 system_pods.go:89] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:06.556336 1795697 system_pods.go:89] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:06.556341 1795697 system_pods.go:89] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:06.556345 1795697 system_pods.go:89] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:06.556349 1795697 system_pods.go:89] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:06.556355 1795697 system_pods.go:89] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:59:06.556368 1795697 retry.go:31] will retry after 326.704802ms: missing components: kube-dns
	I1123 10:59:06.888108 1795697 system_pods.go:86] 8 kube-system pods found
	I1123 10:59:06.888143 1795697 system_pods.go:89] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:59:06.888150 1795697 system_pods.go:89] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:06.888156 1795697 system_pods.go:89] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:06.888161 1795697 system_pods.go:89] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:06.888166 1795697 system_pods.go:89] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:06.888171 1795697 system_pods.go:89] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:06.888176 1795697 system_pods.go:89] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:06.888181 1795697 system_pods.go:89] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:59:06.888200 1795697 retry.go:31] will retry after 482.718586ms: missing components: kube-dns
	I1123 10:59:07.380060 1795697 system_pods.go:86] 8 kube-system pods found
	I1123 10:59:07.380091 1795697 system_pods.go:89] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:59:07.380098 1795697 system_pods.go:89] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:07.380103 1795697 system_pods.go:89] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:07.380108 1795697 system_pods.go:89] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:07.380114 1795697 system_pods.go:89] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:07.380118 1795697 system_pods.go:89] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:07.380121 1795697 system_pods.go:89] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:07.380127 1795697 system_pods.go:89] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:59:07.380141 1795697 retry.go:31] will retry after 524.899428ms: missing components: kube-dns
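	The retries above are minikube waiting for the kube-dns component, i.e. the coredns pod, to leave Pending. A hand-run equivalent of that check is sketched below; the context name comes from this profile, and k8s-app=kube-dns is the label selector coredns pods normally carry (an assumption about this cluster rather than something printed in the log):

	    # Watch the coredns pod minikube is retrying on until it reports Running and READY 1/1.
	    kubectl --context embed-certs-969029 get pods -n kube-system -l k8s-app=kube-dns -o wide --watch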
	I1123 10:59:03.367692 1801378 out.go:252] * Restarting existing docker container for "no-preload-055571" ...
	I1123 10:59:03.367780 1801378 cli_runner.go:164] Run: docker start no-preload-055571
	I1123 10:59:03.639681 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:03.666085 1801378 kic.go:430] container "no-preload-055571" state is running.
	I1123 10:59:03.666481 1801378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-055571
	I1123 10:59:03.688932 1801378 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/config.json ...
	I1123 10:59:03.689160 1801378 machine.go:94] provisionDockerMachine start ...
	I1123 10:59:03.689234 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:03.710099 1801378 main.go:143] libmachine: Using SSH client type: native
	I1123 10:59:03.710419 1801378 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35274 <nil> <nil>}
	I1123 10:59:03.710427 1801378 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:59:03.711356 1801378 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:59:06.875581 1801378 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-055571
	
	I1123 10:59:06.875605 1801378 ubuntu.go:182] provisioning hostname "no-preload-055571"
	I1123 10:59:06.875671 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:06.900928 1801378 main.go:143] libmachine: Using SSH client type: native
	I1123 10:59:06.901237 1801378 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35274 <nil> <nil>}
	I1123 10:59:06.901248 1801378 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-055571 && echo "no-preload-055571" | sudo tee /etc/hostname
	I1123 10:59:07.087756 1801378 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-055571
	
	I1123 10:59:07.087849 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:07.105470 1801378 main.go:143] libmachine: Using SSH client type: native
	I1123 10:59:07.105795 1801378 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35274 <nil> <nil>}
	I1123 10:59:07.105819 1801378 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-055571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-055571/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-055571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:59:07.255239 1801378 main.go:143] libmachine: SSH cmd err, output: <nil>: 
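	The SSH script above sets the node hostname and ensures a 127.0.1.1 mapping for it in /etc/hosts. A quick verification pass on the node (not something the test runs) would be:

	    # Confirm the hostname provisioning took effect inside the container.
	    hostname                      # expect: no-preload-055571
	    grep '^127.0.1.1' /etc/hosts  # expect a line mapping 127.0.1.1 to no-preload-055571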
	I1123 10:59:07.255268 1801378 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-1582671/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-1582671/.minikube}
	I1123 10:59:07.255287 1801378 ubuntu.go:190] setting up certificates
	I1123 10:59:07.255296 1801378 provision.go:84] configureAuth start
	I1123 10:59:07.255362 1801378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-055571
	I1123 10:59:07.277246 1801378 provision.go:143] copyHostCerts
	I1123 10:59:07.277327 1801378 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem, removing ...
	I1123 10:59:07.277340 1801378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem
	I1123 10:59:07.277418 1801378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem (1078 bytes)
	I1123 10:59:07.277527 1801378 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem, removing ...
	I1123 10:59:07.277537 1801378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem
	I1123 10:59:07.277564 1801378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem (1123 bytes)
	I1123 10:59:07.277629 1801378 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem, removing ...
	I1123 10:59:07.277642 1801378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem
	I1123 10:59:07.277667 1801378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem (1675 bytes)
	I1123 10:59:07.277726 1801378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem org=jenkins.no-preload-055571 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-055571]
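	The server certificate above is issued with SANs covering 127.0.0.1, the container IP 192.168.85.2, localhost, minikube and the profile name. To see which SANs actually landed in the generated certificate, a generic openssl inspection works (this is not a step the test performs):

	    # List the Subject Alternative Names embedded in the freshly generated server cert.
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'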
	I1123 10:59:07.503680 1801378 provision.go:177] copyRemoteCerts
	I1123 10:59:07.503753 1801378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:59:07.503801 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:07.522336 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:07.630920 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:59:07.648953 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:59:07.666459 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:59:07.685616 1801378 provision.go:87] duration metric: took 430.297483ms to configureAuth
	I1123 10:59:07.685699 1801378 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:59:07.685939 1801378 config.go:182] Loaded profile config "no-preload-055571": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:59:07.685956 1801378 machine.go:97] duration metric: took 3.996782916s to provisionDockerMachine
	I1123 10:59:07.685965 1801378 start.go:293] postStartSetup for "no-preload-055571" (driver="docker")
	I1123 10:59:07.685988 1801378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:59:07.686046 1801378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:59:07.686117 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:07.703283 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:07.810843 1801378 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:59:07.814039 1801378 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:59:07.814066 1801378 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:59:07.814077 1801378 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/addons for local assets ...
	I1123 10:59:07.814127 1801378 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/files for local assets ...
	I1123 10:59:07.814212 1801378 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem -> 15845322.pem in /etc/ssl/certs
	I1123 10:59:07.814317 1801378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:59:07.821461 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:59:07.838573 1801378 start.go:296] duration metric: took 152.592595ms for postStartSetup
	I1123 10:59:07.838673 1801378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:59:07.838717 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:07.855946 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:07.959985 1801378 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:59:07.964454 1801378 fix.go:56] duration metric: took 4.61754418s for fixHost
	I1123 10:59:07.964478 1801378 start.go:83] releasing machines lock for "no-preload-055571", held for 4.61759894s
	I1123 10:59:07.964547 1801378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-055571
	I1123 10:59:07.981265 1801378 ssh_runner.go:195] Run: cat /version.json
	I1123 10:59:07.981337 1801378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:59:07.981354 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:07.981390 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:08.002941 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:08.007232 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:08.217513 1801378 ssh_runner.go:195] Run: systemctl --version
	I1123 10:59:08.223881 1801378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:59:08.228203 1801378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:59:08.228302 1801378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:59:08.238525 1801378 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:59:08.238553 1801378 start.go:496] detecting cgroup driver to use...
	I1123 10:59:08.238603 1801378 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:59:08.238657 1801378 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 10:59:08.256450 1801378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 10:59:08.276377 1801378 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:59:08.276469 1801378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:59:08.292185 1801378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:59:08.305378 1801378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:59:08.420141 1801378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:59:08.543693 1801378 docker.go:234] disabling docker service ...
	I1123 10:59:08.543831 1801378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:59:08.560569 1801378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:59:08.573247 1801378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:59:08.695349 1801378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:59:08.822800 1801378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:59:08.838750 1801378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:59:08.855970 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 10:59:08.866019 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 10:59:08.875071 1801378 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 10:59:08.875240 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 10:59:08.884630 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:59:08.893099 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 10:59:08.901740 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:59:08.910017 1801378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:59:08.917595 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 10:59:08.925913 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 10:59:08.936081 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 10:59:08.945101 1801378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:59:08.953410 1801378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:59:08.960678 1801378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:59:09.080393 1801378 ssh_runner.go:195] Run: sudo systemctl restart containerd
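	Taken together, the sed edits above switch containerd to the cgroupfs driver, pin the sandbox image to pause:3.10.1, force the runc v2 shim, point CNI at /etc/cni/net.d and re-enable unprivileged ports before the restart. A spot-check of the intended result (reconstructed from the sed expressions, not read off the node):

	    # Verify the settings the sed edits are meant to leave in /etc/containerd/config.toml.
	    grep -n 'SystemdCgroup'             /etc/containerd/config.toml  # expect: SystemdCgroup = false
	    grep -n 'sandbox_image'             /etc/containerd/config.toml  # expect: registry.k8s.io/pause:3.10.1
	    grep -n 'conf_dir'                  /etc/containerd/config.toml  # expect: conf_dir = "/etc/cni/net.d"
	    grep -n 'enable_unprivileged_ports' /etc/containerd/config.toml  # expect: enable_unprivileged_ports = true
	    sudo systemctl is-active containerd                              # expect: active, after the restart above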
	I1123 10:59:09.232443 1801378 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 10:59:09.232596 1801378 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 10:59:09.241856 1801378 start.go:564] Will wait 60s for crictl version
	I1123 10:59:09.241962 1801378 ssh_runner.go:195] Run: which crictl
	I1123 10:59:09.245894 1801378 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:59:09.277292 1801378 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
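	crictl resolves the runtime here without any --runtime-endpoint flag because of the /etc/crictl.yaml written a few lines earlier. The manual equivalent of that wiring is simply:

	    # Point crictl at containerd's CRI socket (same content the test wrote to /etc/crictl.yaml)...
	    printf 'runtime-endpoint: unix:///run/containerd/containerd.sock\n' | sudo tee /etc/crictl.yaml
	    # ...after which plain crictl calls work against containerd.
	    sudo crictl version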
	I1123 10:59:09.277444 1801378 ssh_runner.go:195] Run: containerd --version
	I1123 10:59:09.298858 1801378 ssh_runner.go:195] Run: containerd --version
	I1123 10:59:09.323188 1801378 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 10:59:07.909619 1795697 system_pods.go:86] 8 kube-system pods found
	I1123 10:59:07.909647 1795697 system_pods.go:89] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Running
	I1123 10:59:07.909654 1795697 system_pods.go:89] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:07.909658 1795697 system_pods.go:89] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:07.909663 1795697 system_pods.go:89] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:07.909670 1795697 system_pods.go:89] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:07.909674 1795697 system_pods.go:89] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:07.909678 1795697 system_pods.go:89] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:07.909682 1795697 system_pods.go:89] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Running
	I1123 10:59:07.909689 1795697 system_pods.go:126] duration metric: took 1.611903084s to wait for k8s-apps to be running ...
	I1123 10:59:07.909695 1795697 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:59:07.909745 1795697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:59:07.923682 1795697 system_svc.go:56] duration metric: took 13.97683ms WaitForService to wait for kubelet
	I1123 10:59:07.923707 1795697 kubeadm.go:587] duration metric: took 43.101288176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:59:07.923725 1795697 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:59:07.926826 1795697 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:59:07.926901 1795697 node_conditions.go:123] node cpu capacity is 2
	I1123 10:59:07.926929 1795697 node_conditions.go:105] duration metric: took 3.197632ms to run NodePressure ...
	I1123 10:59:07.926971 1795697 start.go:242] waiting for startup goroutines ...
	I1123 10:59:07.926994 1795697 start.go:247] waiting for cluster config update ...
	I1123 10:59:07.927017 1795697 start.go:256] writing updated cluster config ...
	I1123 10:59:07.927401 1795697 ssh_runner.go:195] Run: rm -f paused
	I1123 10:59:07.930837 1795697 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:59:07.934456 1795697 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pgvtk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.938849 1795697 pod_ready.go:94] pod "coredns-66bc5c9577-pgvtk" is "Ready"
	I1123 10:59:07.938917 1795697 pod_ready.go:86] duration metric: took 4.437028ms for pod "coredns-66bc5c9577-pgvtk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.940947 1795697 pod_ready.go:83] waiting for pod "etcd-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.945119 1795697 pod_ready.go:94] pod "etcd-embed-certs-969029" is "Ready"
	I1123 10:59:07.945148 1795697 pod_ready.go:86] duration metric: took 4.178761ms for pod "etcd-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.947075 1795697 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.951251 1795697 pod_ready.go:94] pod "kube-apiserver-embed-certs-969029" is "Ready"
	I1123 10:59:07.951277 1795697 pod_ready.go:86] duration metric: took 4.178015ms for pod "kube-apiserver-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.953259 1795697 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:08.334923 1795697 pod_ready.go:94] pod "kube-controller-manager-embed-certs-969029" is "Ready"
	I1123 10:59:08.334954 1795697 pod_ready.go:86] duration metric: took 381.674182ms for pod "kube-controller-manager-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:08.535462 1795697 pod_ready.go:83] waiting for pod "kube-proxy-dsz2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:08.935107 1795697 pod_ready.go:94] pod "kube-proxy-dsz2q" is "Ready"
	I1123 10:59:08.935129 1795697 pod_ready.go:86] duration metric: took 399.646474ms for pod "kube-proxy-dsz2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:09.135882 1795697 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:09.535348 1795697 pod_ready.go:94] pod "kube-scheduler-embed-certs-969029" is "Ready"
	I1123 10:59:09.535372 1795697 pod_ready.go:86] duration metric: took 399.466997ms for pod "kube-scheduler-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:09.535385 1795697 pod_ready.go:40] duration metric: took 1.604522906s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:59:09.615514 1795697 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:59:09.619373 1795697 out.go:179] * Done! kubectl is now configured to use "embed-certs-969029" cluster and "default" namespace by default
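	With the profile restored, the embed-certs-969029 context is now the kubectl default. A minimal follow-up sanity check (not part of the test) would be:

	    # Confirm the restarted cluster answers and its system pods are Running.
	    kubectl --context embed-certs-969029 get nodes
	    kubectl --context embed-certs-969029 get pods -n kube-system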
	I1123 10:59:09.326166 1801378 cli_runner.go:164] Run: docker network inspect no-preload-055571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:59:09.344982 1801378 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:59:09.348838 1801378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
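	The one-liner above rewrites /etc/hosts so host.minikube.internal resolves to the gateway address 192.168.85.1 inside the node. Verifying it afterwards is just:

	    # host.minikube.internal should now map to the Docker network gateway.
	    grep 'host.minikube.internal' /etc/hosts   # expect: 192.168.85.1  host.minikube.internal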
	I1123 10:59:09.358706 1801378 kubeadm.go:884] updating cluster {Name:no-preload-055571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:59:09.358851 1801378 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 10:59:09.358906 1801378 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:59:09.384809 1801378 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 10:59:09.384833 1801378 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:59:09.384841 1801378 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1123 10:59:09.384941 1801378 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-055571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
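	The unit fragment above is what minikube installs as a systemd drop-in for the kubelet; the scp lines a bit further down place it at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. To inspect the merged unit on the node (a generic systemd check, not a test step):

	    # Show the kubelet unit together with the 10-kubeadm.conf drop-in minikube wrote.
	    systemctl cat kubelet
	    systemctl status kubelet --no-pager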
	I1123 10:59:09.385013 1801378 ssh_runner.go:195] Run: sudo crictl info
	I1123 10:59:09.411314 1801378 cni.go:84] Creating CNI manager for ""
	I1123 10:59:09.411344 1801378 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:59:09.411364 1801378 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:59:09.411388 1801378 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-055571 NodeName:no-preload-055571 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:59:09.411501 1801378 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-055571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:59:09.411572 1801378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:59:09.420798 1801378 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:59:09.420920 1801378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:59:09.428155 1801378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 10:59:09.441361 1801378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:59:09.456410 1801378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
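	The kubeadm configuration rendered above is what was just copied to /var/tmp/minikube/kubeadm.yaml.new. Sanity-checking such a file by hand could look like this (kubeadm config validate exists in recent kubeadm releases; treat this as a suggestion rather than a step the test performs):

	    # Inspect the rendered kubeadm config on the node and let kubeadm validate it.
	    sudo cat /var/tmp/minikube/kubeadm.yaml.new
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new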
	I1123 10:59:09.469357 1801378 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:59:09.472985 1801378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:59:09.482113 1801378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:59:09.613780 1801378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:59:09.633064 1801378 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571 for IP: 192.168.85.2
	I1123 10:59:09.633086 1801378 certs.go:195] generating shared ca certs ...
	I1123 10:59:09.633102 1801378 certs.go:227] acquiring lock for ca certs: {Name:mk3cca888d785818ac92c3c8d4e66a37bae0b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:59:09.633239 1801378 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key
	I1123 10:59:09.633285 1801378 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key
	I1123 10:59:09.633297 1801378 certs.go:257] generating profile certs ...
	I1123 10:59:09.633428 1801378 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.key
	I1123 10:59:09.633516 1801378 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key.3d6856fb
	I1123 10:59:09.633563 1801378 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key
	I1123 10:59:09.633674 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem (1338 bytes)
	W1123 10:59:09.633709 1801378 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532_empty.pem, impossibly tiny 0 bytes
	I1123 10:59:09.633721 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:59:09.633750 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:59:09.633779 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:59:09.633807 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem (1675 bytes)
	I1123 10:59:09.633855 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:59:09.634581 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:59:09.689523 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:59:09.739831 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:59:09.811371 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:59:09.894159 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:59:09.936774 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:59:09.967536 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:59:09.985896 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:59:10.009672 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /usr/share/ca-certificates/15845322.pem (1708 bytes)
	I1123 10:59:10.031516 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:59:10.061642 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem --> /usr/share/ca-certificates/1584532.pem (1338 bytes)
	I1123 10:59:10.082255 1801378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:59:10.110140 1801378 ssh_runner.go:195] Run: openssl version
	I1123 10:59:10.119842 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:59:10.130187 1801378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:59:10.135373 1801378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:10 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:59:10.135443 1801378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:59:10.183928 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:59:10.192197 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584532.pem && ln -fs /usr/share/ca-certificates/1584532.pem /etc/ssl/certs/1584532.pem"
	I1123 10:59:10.200632 1801378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584532.pem
	I1123 10:59:10.204233 1801378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:17 /usr/share/ca-certificates/1584532.pem
	I1123 10:59:10.204295 1801378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584532.pem
	I1123 10:59:10.248721 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584532.pem /etc/ssl/certs/51391683.0"
	I1123 10:59:10.257302 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15845322.pem && ln -fs /usr/share/ca-certificates/15845322.pem /etc/ssl/certs/15845322.pem"
	I1123 10:59:10.265452 1801378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15845322.pem
	I1123 10:59:10.270002 1801378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:17 /usr/share/ca-certificates/15845322.pem
	I1123 10:59:10.270063 1801378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15845322.pem
	I1123 10:59:10.312949 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15845322.pem /etc/ssl/certs/3ec20f2e.0"
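	Note: the symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the corresponding PEM files, which is how the system trust store looks up a CA at verification time. A minimal sketch of the same mapping, reusing paths from the log, might be:
	    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${hash}.0                # matches the b5213941.0 link above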
	I1123 10:59:10.321054 1801378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:59:10.327382 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:59:10.402943 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:59:10.463434 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:59:10.524730 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:59:10.604457 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:59:10.692754 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
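	Note: the -checkend 86400 probes above ask OpenSSL whether each certificate will still be valid 86400 seconds (24 hours) from now; exit status 0 means it will, non-zero means it expires sooner, presumably what drives the decision to reuse rather than regenerate the control-plane certs. A manual spot check with the same flag and a path from the log could look like:
	    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	      && echo "valid for at least 24h" || echo "expires within 24h"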
	I1123 10:59:10.749103 1801378 kubeadm.go:401] StartCluster: {Name:no-preload-055571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:59:10.749206 1801378 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 10:59:10.749288 1801378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:59:10.818676 1801378 cri.go:89] found id: "d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83"
	I1123 10:59:10.818709 1801378 cri.go:89] found id: "141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611"
	I1123 10:59:10.818715 1801378 cri.go:89] found id: "f6ff8574431495f4a49d9c3759b8049dfc4450cdb014fcd3928c598ca2c0da52"
	I1123 10:59:10.818726 1801378 cri.go:89] found id: "9d45eab165f426941b46cacf4c992c6d8d994ff8d83232faff07678871d4234f"
	I1123 10:59:10.818729 1801378 cri.go:89] found id: "8b471e7e9bbda9cbfbea76934750632ac310334af415b16e44073b2e576eabc9"
	I1123 10:59:10.818732 1801378 cri.go:89] found id: "2f827144cf7fac652ccb74aef0066e57b21ecef01a8dcb73809e96022b694400"
	I1123 10:59:10.818736 1801378 cri.go:89] found id: "14b800b67ad6052023ad76ace7ece6ce928c08d72e9876a0ba4ec63aa2fd2940"
	I1123 10:59:10.818738 1801378 cri.go:89] found id: "6249f178fb08fff7a76e05ef2091e7236bff165ee849beeba741138fd5d4e5d1"
	I1123 10:59:10.818742 1801378 cri.go:89] found id: "eab30623258b276d71d20e0094aa488fe2eaf689d062eb457557742f0cf5e8dd"
	I1123 10:59:10.818750 1801378 cri.go:89] found id: ""
	I1123 10:59:10.818819 1801378 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 10:59:10.857473 1801378 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664","pid":932,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664/rootfs","created":"2025-11-23T10:59:10.698789689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-055571_ce9e5a9dcaff1330f74c9cdff3f1a808","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-055571","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ce9e5a9dcaff1330f74c9cdff3f1a808"},"owner":"root"},{"ociVersion":"1.2.1","id":"b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0","pid":940,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0/rootfs","created":"2025-11-23T10:59:10.691440306Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0","io.kubernetes.c
ri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-055571_19263a60045981406ff42a29aacfbe1d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-055571","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"19263a60045981406ff42a29aacfbe1d"},"owner":"root"},{"ociVersion":"1.2.1","id":"b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396","pid":877,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396/rootfs","created":"2025-11-23T10:59:10.599997408Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-
quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-055571_1009b601f1db86628e469db0a601cbf6","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-055571","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1009b601f1db86628e469db0a601cbf6"},"owner":"root"},{"ociVersion":"1.2.1","id":"d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83","pid":993,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83/rootfs","created":"2025-11-23T10:59:10.836245069Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.cont
ainer-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396","io.kubernetes.cri.sandbox-name":"etcd-no-preload-055571","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1009b601f1db86628e469db0a601cbf6"},"owner":"root"},{"ociVersion":"1.2.1","id":"ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22","pid":977,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22/rootfs","created":"2025-11-23T10:59:10.780522368Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quo
ta":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-055571_85fe8bb1853930358150671c0b7a1d0a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-055571","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"85fe8bb1853930358150671c0b7a1d0a"},"owner":"root"}]
	I1123 10:59:10.857640 1801378 cri.go:126] list returned 5 containers
	I1123 10:59:10.857656 1801378 cri.go:129] container: {ID:5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664 Status:running}
	I1123 10:59:10.857675 1801378 cri.go:131] skipping 5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664 - not in ps
	I1123 10:59:10.857680 1801378 cri.go:129] container: {ID:b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0 Status:running}
	I1123 10:59:10.857685 1801378 cri.go:131] skipping b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0 - not in ps
	I1123 10:59:10.857696 1801378 cri.go:129] container: {ID:b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396 Status:running}
	I1123 10:59:10.857700 1801378 cri.go:131] skipping b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396 - not in ps
	I1123 10:59:10.857703 1801378 cri.go:129] container: {ID:d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83 Status:created}
	I1123 10:59:10.857709 1801378 cri.go:135] skipping {d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83 created}: state = "created", want "paused"
	I1123 10:59:10.857719 1801378 cri.go:129] container: {ID:ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22 Status:created}
	I1123 10:59:10.857746 1801378 cri.go:131] skipping ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22 - not in ps
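	Note: the block above cross-references two views of the runtime: container IDs reported by crictl (the CRI side) and low-level state reported by runc, keeping only containers whose runc state matches the requested "paused" state (hence every entry is skipped here). The two underlying commands, as they appear in the log, are roughly:
	    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system   # CRI container IDs in kube-system
	    sudo runc --root /run/containerd/runc/k8s.io list -f json                   # runc state for the same containerd root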
	I1123 10:59:10.857811 1801378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:59:10.866782 1801378 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:59:10.866815 1801378 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:59:10.866878 1801378 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:59:10.878714 1801378 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:59:10.880264 1801378 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-055571" does not appear in /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:59:10.880825 1801378 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-1582671/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-055571" cluster setting kubeconfig missing "no-preload-055571" context setting]
	I1123 10:59:10.881619 1801378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:59:10.884809 1801378 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:59:10.907981 1801378 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 10:59:10.908018 1801378 kubeadm.go:602] duration metric: took 41.196108ms to restartPrimaryControlPlane
	I1123 10:59:10.908036 1801378 kubeadm.go:403] duration metric: took 158.935993ms to StartCluster
	I1123 10:59:10.908052 1801378 settings.go:142] acquiring lock: {Name:mk2ffa164862318fd53ac563f81d54c15c17157b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:59:10.908124 1801378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:59:10.909720 1801378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:59:10.910376 1801378 config.go:182] Loaded profile config "no-preload-055571": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:59:10.910153 1801378 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:59:10.910485 1801378 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:59:10.910771 1801378 addons.go:70] Setting storage-provisioner=true in profile "no-preload-055571"
	I1123 10:59:10.910788 1801378 addons.go:239] Setting addon storage-provisioner=true in "no-preload-055571"
	W1123 10:59:10.910795 1801378 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:59:10.910820 1801378 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:59:10.910868 1801378 addons.go:70] Setting default-storageclass=true in profile "no-preload-055571"
	I1123 10:59:10.910886 1801378 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-055571"
	I1123 10:59:10.911290 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:10.911344 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:10.911719 1801378 addons.go:70] Setting metrics-server=true in profile "no-preload-055571"
	I1123 10:59:10.911743 1801378 addons.go:239] Setting addon metrics-server=true in "no-preload-055571"
	W1123 10:59:10.911750 1801378 addons.go:248] addon metrics-server should already be in state true
	I1123 10:59:10.911782 1801378 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:59:10.912282 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:10.915875 1801378 out.go:179] * Verifying Kubernetes components...
	I1123 10:59:10.916108 1801378 addons.go:70] Setting dashboard=true in profile "no-preload-055571"
	I1123 10:59:10.916127 1801378 addons.go:239] Setting addon dashboard=true in "no-preload-055571"
	W1123 10:59:10.916151 1801378 addons.go:248] addon dashboard should already be in state true
	I1123 10:59:10.916189 1801378 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:59:10.916647 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:10.921982 1801378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:59:10.982352 1801378 addons.go:239] Setting addon default-storageclass=true in "no-preload-055571"
	W1123 10:59:10.982376 1801378 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:59:10.982400 1801378 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:59:10.986833 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:10.991562 1801378 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 10:59:10.994588 1801378 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 10:59:10.994610 1801378 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 10:59:10.994694 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:10.995101 1801378 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:59:10.998072 1801378 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:59:10.998147 1801378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:59:10.998278 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:11.037070 1801378 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:59:11.037236 1801378 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:59:11.037250 1801378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:59:11.037311 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:11.047479 1801378 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:59:11.055235 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:59:11.055267 1801378 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:59:11.055346 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:11.064685 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:11.105248 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:11.118415 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:11.119561 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:11.312012 1801378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:59:11.416683 1801378 node_ready.go:35] waiting up to 6m0s for node "no-preload-055571" to be "Ready" ...
	I1123 10:59:11.466961 1801378 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 10:59:11.467039 1801378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 10:59:11.530787 1801378 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 10:59:11.530864 1801378 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 10:59:11.564816 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:59:11.564879 1801378 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:59:11.604647 1801378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:59:11.645194 1801378 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 10:59:11.645272 1801378 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 10:59:11.657606 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:59:11.657683 1801378 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:59:11.693475 1801378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:59:11.779843 1801378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 10:59:11.781755 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:59:11.781816 1801378 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:59:11.847299 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:59:11.847372 1801378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:59:11.978390 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:59:11.978424 1801378 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:59:12.084291 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:59:12.084378 1801378 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:59:12.191841 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:59:12.191927 1801378 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:59:12.289103 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:59:12.289178 1801378 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:59:12.358422 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:59:12.358511 1801378 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:59:12.407244 1801378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:59:15.343674 1801378 node_ready.go:49] node "no-preload-055571" is "Ready"
	I1123 10:59:15.343702 1801378 node_ready.go:38] duration metric: took 3.926915129s for node "no-preload-055571" to be "Ready" ...
	I1123 10:59:15.343717 1801378 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:59:15.343775 1801378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:59:18.299860 1801378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.695142336s)
	I1123 10:59:18.299964 1801378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.606411044s)
	I1123 10:59:18.300109 1801378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.520186504s)
	I1123 10:59:18.300127 1801378 addons.go:495] Verifying addon metrics-server=true in "no-preload-055571"
	I1123 10:59:18.300224 1801378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.892901712s)
	I1123 10:59:18.300252 1801378 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.956465494s)
	I1123 10:59:18.300291 1801378 api_server.go:72] duration metric: took 7.389841623s to wait for apiserver process to appear ...
	I1123 10:59:18.300304 1801378 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:59:18.300320 1801378 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:59:18.303325 1801378 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-055571 addons enable metrics-server
	
	I1123 10:59:18.310685 1801378 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:59:18.310713 1801378 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:59:18.317315 1801378 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
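	Note: a 500 from /healthz with a single failed poststarthook (apiservice-discovery-controller here) is common shortly after the apiserver comes up, and minikube keeps polling until every check reports ok. To see the same per-check breakdown by hand against a reachable cluster, one option (a sketch, not part of the captured run) is:
	    kubectl get --raw='/healthz?verbose'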
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	90081b0267e79       1611cd07b61d5       7 seconds ago        Running             busybox                   0                   a24840f1b442c       busybox                                      default
	3fd2478f130bc       138784d87c9c5       13 seconds ago       Running             coredns                   0                   b879e9f20c21e       coredns-66bc5c9577-pgvtk                     kube-system
	2f5d06eedabf2       ba04bb24b9575       13 seconds ago       Running             storage-provisioner       0                   d19bce664a79c       storage-provisioner                          kube-system
	8c09c9a1e0dd4       b1a8c6f707935       54 seconds ago       Running             kindnet-cni               0                   d7cdde11f0797       kindnet-969gr                                kube-system
	9947f7108490a       05baa95f5142d       54 seconds ago       Running             kube-proxy                0                   16cda86e8fa3e       kube-proxy-dsz2q                             kube-system
	c1b9d044a1c2b       a1894772a478e       About a minute ago   Running             etcd                      0                   e5cb3eab2523f       etcd-embed-certs-969029                      kube-system
	d97351f624889       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   ca95abfb5d4cd       kube-apiserver-embed-certs-969029            kube-system
	88e7145750dc1       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   374fc6e71ba48       kube-controller-manager-embed-certs-969029   kube-system
	bf863fc5d6205       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   006b36ea1d9a3       kube-scheduler-embed-certs-969029            kube-system
	
	
	==> containerd <==
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.768335203Z" level=info msg="Container 3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.792307825Z" level=info msg="CreateContainer within sandbox \"d19bce664a79c1e8fc8fa446843807402d9198cd5d03a05a1a67df00e2c33fc1\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"2f5d06eedabf27faeb1d3c7a374ee174b2d38eb17a6ddaa747e6c3ba71437052\""
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.795767038Z" level=info msg="StartContainer for \"2f5d06eedabf27faeb1d3c7a374ee174b2d38eb17a6ddaa747e6c3ba71437052\""
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.798617725Z" level=info msg="connecting to shim 2f5d06eedabf27faeb1d3c7a374ee174b2d38eb17a6ddaa747e6c3ba71437052" address="unix:///run/containerd/s/e52f12b56e3ca00f2027fc609d3a20e8a314e50765fbc07209fc2649bc48e4b6" protocol=ttrpc version=3
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.800878855Z" level=info msg="CreateContainer within sandbox \"b879e9f20c21e8faa9c7387df36c30c425862429584526369261f0be3a746252\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5\""
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.801611006Z" level=info msg="StartContainer for \"3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5\""
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.802783678Z" level=info msg="connecting to shim 3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5" address="unix:///run/containerd/s/f4b598774fbad6b7ef416c6e46d2560911a8c368f3455e3ced7aaf8d57e82073" protocol=ttrpc version=3
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.946704019Z" level=info msg="StartContainer for \"2f5d06eedabf27faeb1d3c7a374ee174b2d38eb17a6ddaa747e6c3ba71437052\" returns successfully"
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.951051277Z" level=info msg="StartContainer for \"3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5\" returns successfully"
	Nov 23 10:59:10 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:10.219063464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:976d8660-27e9-4d64-bcea-5f2857bfbd4f,Namespace:default,Attempt:0,}"
	Nov 23 10:59:10 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:10.295720346Z" level=info msg="connecting to shim a24840f1b442cc75d449f58849a383fc3412ee403b88ea57012f0ee012264e5e" address="unix:///run/containerd/s/3846421d9eab7c280387efdcbe0b3c79b30e653055d9e15f35fb3938b331c676" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 10:59:10 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:10.406862362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:976d8660-27e9-4d64-bcea-5f2857bfbd4f,Namespace:default,Attempt:0,} returns sandbox id \"a24840f1b442cc75d449f58849a383fc3412ee403b88ea57012f0ee012264e5e\""
	Nov 23 10:59:10 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:10.412449195Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.547131114Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.550353033Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.552971250Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.557112891Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.558551617Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.144689533s"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.558707092Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.573733808Z" level=info msg="CreateContainer within sandbox \"a24840f1b442cc75d449f58849a383fc3412ee403b88ea57012f0ee012264e5e\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.586546785Z" level=info msg="Container 90081b0267e793738de3a749e59d0b43bd1e2a01df03b776c0c18a6629df7a58: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.598709202Z" level=info msg="CreateContainer within sandbox \"a24840f1b442cc75d449f58849a383fc3412ee403b88ea57012f0ee012264e5e\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"90081b0267e793738de3a749e59d0b43bd1e2a01df03b776c0c18a6629df7a58\""
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.601803813Z" level=info msg="StartContainer for \"90081b0267e793738de3a749e59d0b43bd1e2a01df03b776c0c18a6629df7a58\""
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.606078145Z" level=info msg="connecting to shim 90081b0267e793738de3a749e59d0b43bd1e2a01df03b776c0c18a6629df7a58" address="unix:///run/containerd/s/3846421d9eab7c280387efdcbe0b3c79b30e653055d9e15f35fb3938b331c676" protocol=ttrpc version=3
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.730705441Z" level=info msg="StartContainer for \"90081b0267e793738de3a749e59d0b43bd1e2a01df03b776c0c18a6629df7a58\" returns successfully"
	
	
	==> coredns [3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58600 - 19918 "HINFO IN 5048464527559782916.8105685143490074977. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029402594s
	
	
	==> describe nodes <==
	Name:               embed-certs-969029
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-969029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=embed-certs-969029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_58_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:58:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-969029
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:59:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:59:06 +0000   Sun, 23 Nov 2025 10:58:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:59:06 +0000   Sun, 23 Nov 2025 10:58:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:59:06 +0000   Sun, 23 Nov 2025 10:58:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:59:06 +0000   Sun, 23 Nov 2025 10:59:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-969029
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                60a0da82-30dc-42f4-8f94-24e171ac05b5
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-pgvtk                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     56s
	  kube-system                 etcd-embed-certs-969029                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-969gr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      56s
	  kube-system                 kube-apiserver-embed-certs-969029             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-969029    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-dsz2q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-scheduler-embed-certs-969029             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 70s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node embed-certs-969029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node embed-certs-969029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x7 over 70s)  kubelet          Node embed-certs-969029 status is now: NodeHasSufficientPID
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s                kubelet          Node embed-certs-969029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s                kubelet          Node embed-certs-969029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s                kubelet          Node embed-certs-969029 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s                node-controller  Node embed-certs-969029 event: Registered Node embed-certs-969029 in Controller
	  Normal   NodeReady                14s                kubelet          Node embed-certs-969029 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:50] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [c1b9d044a1c2bc1b727f46490e6f0d365dc6c431bca64ca89948b479a95835df] <==
	{"level":"warn","ts":"2025-11-23T10:58:15.153270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.178163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.201661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.213729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.232030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.254157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.275633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.300272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.310438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.328584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.372987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.417438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.430254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.456657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.472216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.492620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.528164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.543980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.557179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.582219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.597244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.614524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.631272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.644078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.727084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:59:20 up 11:41,  0 user,  load average: 3.28, 3.19, 2.88
	Linux embed-certs-969029 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8c09c9a1e0dd488c3ae89f22758bdfbc3ffc7ede552c1c94e903d4ace20016cc] <==
	I1123 10:58:25.962761       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:58:25.963037       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:58:26.027308       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:58:26.027347       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:58:26.027376       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:58:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:58:26.136277       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:58:26.136302       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:58:26.136310       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:58:26.229476       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:58:56.136841       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 10:58:56.136843       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:58:56.229467       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:58:56.229490       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 10:58:57.736487       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:58:57.736519       1 metrics.go:72] Registering metrics
	I1123 10:58:57.736737       1 controller.go:711] "Syncing nftables rules"
	I1123 10:59:06.143113       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:59:06.143173       1 main.go:301] handling current node
	I1123 10:59:16.137400       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:59:16.137525       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d97351f624889f17b6b6beb7b97f46e4761bff0db0ac24e7478af3bcafd0c577] <==
	I1123 10:58:16.793927       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:58:16.794338       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:58:16.795439       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:58:16.796590       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:16.797287       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:58:16.808170       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:58:16.809608       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:16.825497       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:58:17.391499       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:58:17.400812       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:58:17.400835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:58:18.483921       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:58:18.540991       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:58:18.654280       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:58:18.720232       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:58:18.737182       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 10:58:18.738861       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:58:18.748796       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:58:19.816378       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:58:19.841132       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:58:19.863171       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:58:24.401073       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:58:24.550995       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:24.558677       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:24.597860       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [88e7145750dc13e96e110d81fec6e8e8687bddaf3b752577c8c8542c93a7af25] <==
	I1123 10:58:23.705030       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:58:23.705144       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-969029"
	I1123 10:58:23.705216       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 10:58:23.705279       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 10:58:23.705371       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:58:23.705455       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:58:23.705478       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:58:23.705553       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:58:23.705627       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:58:23.706268       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:58:23.706550       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:58:23.706841       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:58:23.706869       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:58:23.707211       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:58:23.707249       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:58:23.707738       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 10:58:23.708579       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:58:23.710207       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:58:23.710551       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:58:23.712636       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:58:23.718628       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:58:23.730148       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:58:23.742596       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:58:23.744618       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:59:08.712384       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9947f7108490a4550e6b3803b512a0e2c01bf5577c5ff272a044aae4140be053] <==
	I1123 10:58:25.905556       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:58:26.035701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:58:26.144788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:58:26.144831       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:58:26.144906       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:58:26.340834       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:58:26.341397       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:58:26.353098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:58:26.353629       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:58:26.354073       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:58:26.356820       1 config.go:200] "Starting service config controller"
	I1123 10:58:26.356977       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:58:26.357173       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:58:26.358003       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:58:26.358153       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:58:26.358239       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:58:26.359756       1 config.go:309] "Starting node config controller"
	I1123 10:58:26.359877       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:58:26.359969       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:58:26.457184       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:58:26.458437       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:58:26.458451       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bf863fc5d6205f9bf643fb75c7033ef0cd9446ec28cb1351f790d8085f7a4125] <==
	E1123 10:58:16.778047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:58:16.783841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:58:16.783913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:58:16.783971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:58:16.784011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:58:16.784046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:58:16.784167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:58:16.784579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:58:16.785269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:58:16.785325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:58:16.789402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:58:17.587049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:58:17.591652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:58:17.622454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:58:17.795380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:58:17.825272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:58:17.897628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:58:17.925715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 10:58:17.926091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:58:17.956904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:58:17.959394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:58:18.040383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:58:18.084219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:58:18.134640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1123 10:58:20.842499       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:58:21 embed-certs-969029 kubelet[1470]: I1123 10:58:21.360514    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-969029" podStartSLOduration=1.3604864700000001 podStartE2EDuration="1.36048647s" podCreationTimestamp="2025-11-23 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:21.326087755 +0000 UTC m=+1.561526613" watchObservedRunningTime="2025-11-23 10:58:21.36048647 +0000 UTC m=+1.595925328"
	Nov 23 10:58:21 embed-certs-969029 kubelet[1470]: I1123 10:58:21.380137    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-969029" podStartSLOduration=1.380117224 podStartE2EDuration="1.380117224s" podCreationTimestamp="2025-11-23 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:21.36095027 +0000 UTC m=+1.596389145" watchObservedRunningTime="2025-11-23 10:58:21.380117224 +0000 UTC m=+1.615556081"
	Nov 23 10:58:21 embed-certs-969029 kubelet[1470]: I1123 10:58:21.423595    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-969029" podStartSLOduration=1.42357606 podStartE2EDuration="1.42357606s" podCreationTimestamp="2025-11-23 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:21.380387371 +0000 UTC m=+1.615826237" watchObservedRunningTime="2025-11-23 10:58:21.42357606 +0000 UTC m=+1.659014926"
	Nov 23 10:58:21 embed-certs-969029 kubelet[1470]: I1123 10:58:21.423718    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-969029" podStartSLOduration=1.423712672 podStartE2EDuration="1.423712672s" podCreationTimestamp="2025-11-23 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:21.423242825 +0000 UTC m=+1.658681789" watchObservedRunningTime="2025-11-23 10:58:21.423712672 +0000 UTC m=+1.659151530"
	Nov 23 10:58:23 embed-certs-969029 kubelet[1470]: I1123 10:58:23.678668    1470 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 10:58:23 embed-certs-969029 kubelet[1470]: I1123 10:58:23.679698    1470 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750089    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/002c9f0c-528d-4eed-b241-435de51af248-xtables-lock\") pod \"kube-proxy-dsz2q\" (UID: \"002c9f0c-528d-4eed-b241-435de51af248\") " pod="kube-system/kube-proxy-dsz2q"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750144    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/da716ec2-e4e8-4663-a452-0c9925b721e1-cni-cfg\") pod \"kindnet-969gr\" (UID: \"da716ec2-e4e8-4663-a452-0c9925b721e1\") " pod="kube-system/kindnet-969gr"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750169    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da716ec2-e4e8-4663-a452-0c9925b721e1-xtables-lock\") pod \"kindnet-969gr\" (UID: \"da716ec2-e4e8-4663-a452-0c9925b721e1\") " pod="kube-system/kindnet-969gr"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750189    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmbsj\" (UniqueName: \"kubernetes.io/projected/da716ec2-e4e8-4663-a452-0c9925b721e1-kube-api-access-qmbsj\") pod \"kindnet-969gr\" (UID: \"da716ec2-e4e8-4663-a452-0c9925b721e1\") " pod="kube-system/kindnet-969gr"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750236    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da716ec2-e4e8-4663-a452-0c9925b721e1-lib-modules\") pod \"kindnet-969gr\" (UID: \"da716ec2-e4e8-4663-a452-0c9925b721e1\") " pod="kube-system/kindnet-969gr"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750268    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/002c9f0c-528d-4eed-b241-435de51af248-kube-proxy\") pod \"kube-proxy-dsz2q\" (UID: \"002c9f0c-528d-4eed-b241-435de51af248\") " pod="kube-system/kube-proxy-dsz2q"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750288    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/002c9f0c-528d-4eed-b241-435de51af248-lib-modules\") pod \"kube-proxy-dsz2q\" (UID: \"002c9f0c-528d-4eed-b241-435de51af248\") " pod="kube-system/kube-proxy-dsz2q"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750311    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4djn5\" (UniqueName: \"kubernetes.io/projected/002c9f0c-528d-4eed-b241-435de51af248-kube-api-access-4djn5\") pod \"kube-proxy-dsz2q\" (UID: \"002c9f0c-528d-4eed-b241-435de51af248\") " pod="kube-system/kube-proxy-dsz2q"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.939373    1470 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 10:58:26 embed-certs-969029 kubelet[1470]: I1123 10:58:26.359230    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-969gr" podStartSLOduration=2.359170578 podStartE2EDuration="2.359170578s" podCreationTimestamp="2025-11-23 10:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:26.332544418 +0000 UTC m=+6.567983276" watchObservedRunningTime="2025-11-23 10:58:26.359170578 +0000 UTC m=+6.594609444"
	Nov 23 10:58:30 embed-certs-969029 kubelet[1470]: I1123 10:58:30.188303    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dsz2q" podStartSLOduration=6.188284239 podStartE2EDuration="6.188284239s" podCreationTimestamp="2025-11-23 10:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:26.365445929 +0000 UTC m=+6.600884795" watchObservedRunningTime="2025-11-23 10:58:30.188284239 +0000 UTC m=+10.423723105"
	Nov 23 10:59:06 embed-certs-969029 kubelet[1470]: I1123 10:59:06.237893    1470 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 10:59:06 embed-certs-969029 kubelet[1470]: I1123 10:59:06.456332    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dec18915-2717-4390-96b8-95f56ec7405f-tmp\") pod \"storage-provisioner\" (UID: \"dec18915-2717-4390-96b8-95f56ec7405f\") " pod="kube-system/storage-provisioner"
	Nov 23 10:59:06 embed-certs-969029 kubelet[1470]: I1123 10:59:06.456389    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4s9m\" (UniqueName: \"kubernetes.io/projected/dec18915-2717-4390-96b8-95f56ec7405f-kube-api-access-f4s9m\") pod \"storage-provisioner\" (UID: \"dec18915-2717-4390-96b8-95f56ec7405f\") " pod="kube-system/storage-provisioner"
	Nov 23 10:59:06 embed-certs-969029 kubelet[1470]: I1123 10:59:06.456414    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81912f6a-cbf4-4bd7-84ac-ca2ffc36269c-config-volume\") pod \"coredns-66bc5c9577-pgvtk\" (UID: \"81912f6a-cbf4-4bd7-84ac-ca2ffc36269c\") " pod="kube-system/coredns-66bc5c9577-pgvtk"
	Nov 23 10:59:06 embed-certs-969029 kubelet[1470]: I1123 10:59:06.456432    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bkzr\" (UniqueName: \"kubernetes.io/projected/81912f6a-cbf4-4bd7-84ac-ca2ffc36269c-kube-api-access-5bkzr\") pod \"coredns-66bc5c9577-pgvtk\" (UID: \"81912f6a-cbf4-4bd7-84ac-ca2ffc36269c\") " pod="kube-system/coredns-66bc5c9577-pgvtk"
	Nov 23 10:59:07 embed-certs-969029 kubelet[1470]: I1123 10:59:07.457438    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pgvtk" podStartSLOduration=43.457421411 podStartE2EDuration="43.457421411s" podCreationTimestamp="2025-11-23 10:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:59:07.421228503 +0000 UTC m=+47.656667361" watchObservedRunningTime="2025-11-23 10:59:07.457421411 +0000 UTC m=+47.692860269"
	Nov 23 10:59:09 embed-certs-969029 kubelet[1470]: I1123 10:59:09.904707    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.904686817 podStartE2EDuration="43.904686817s" podCreationTimestamp="2025-11-23 10:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:59:07.489088408 +0000 UTC m=+47.724527274" watchObservedRunningTime="2025-11-23 10:59:09.904686817 +0000 UTC m=+50.140125675"
	Nov 23 10:59:09 embed-certs-969029 kubelet[1470]: I1123 10:59:09.984948    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz4k9\" (UniqueName: \"kubernetes.io/projected/976d8660-27e9-4d64-bcea-5f2857bfbd4f-kube-api-access-zz4k9\") pod \"busybox\" (UID: \"976d8660-27e9-4d64-bcea-5f2857bfbd4f\") " pod="default/busybox"
	
	
	==> storage-provisioner [2f5d06eedabf27faeb1d3c7a374ee174b2d38eb17a6ddaa747e6c3ba71437052] <==
	I1123 10:59:06.951678       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 10:59:06.995561       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 10:59:06.995610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:59:07.001019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:07.010669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:59:07.010828       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:59:07.011004       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-969029_0feb64e6-d700-447f-8805-add762a268fd!
	I1123 10:59:07.012033       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c09876f5-d9bf-4563-886d-c5272f70f415", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-969029_0feb64e6-d700-447f-8805-add762a268fd became leader
	W1123 10:59:07.017753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:07.023684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:59:07.112067       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-969029_0feb64e6-d700-447f-8805-add762a268fd!
	W1123 10:59:09.027407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:09.035065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:11.043788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:11.083856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:13.087753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:13.094431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:15.098372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:15.107423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:17.112088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:17.122237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:19.126037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:19.131073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-969029 -n embed-certs-969029
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-969029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-969029
helpers_test.go:243: (dbg) docker inspect embed-certs-969029:

-- stdout --
	[
	    {
	        "Id": "d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de",
	        "Created": "2025-11-23T10:57:50.842484184Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1796233,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T10:57:51.032485172Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de/hostname",
	        "HostsPath": "/var/lib/docker/containers/d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de/hosts",
	        "LogPath": "/var/lib/docker/containers/d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de/d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de-json.log",
	        "Name": "/embed-certs-969029",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-969029:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-969029",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d3cb17036a8e7a9743a92a5c4c11ab99f53a0ace28f44400e4e041b9a01919de",
	                "LowerDir": "/var/lib/docker/overlay2/97f5a9880020c43a7a82b7f70bd6dd89e6b4a203d995cd2567245240cbae9ffc-init/diff:/var/lib/docker/overlay2/fe0bef51c968206096993e9a75db2143cd9cd74d56696a257291ce63f851a2d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/97f5a9880020c43a7a82b7f70bd6dd89e6b4a203d995cd2567245240cbae9ffc/merged",
	                "UpperDir": "/var/lib/docker/overlay2/97f5a9880020c43a7a82b7f70bd6dd89e6b4a203d995cd2567245240cbae9ffc/diff",
	                "WorkDir": "/var/lib/docker/overlay2/97f5a9880020c43a7a82b7f70bd6dd89e6b4a203d995cd2567245240cbae9ffc/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-969029",
	                "Source": "/var/lib/docker/volumes/embed-certs-969029/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-969029",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-969029",
	                "name.minikube.sigs.k8s.io": "embed-certs-969029",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "50021cc80909e5d261e7d6437ae7441cc6b5b829f27cd62f1598ce5e3268821f",
	            "SandboxKey": "/var/run/docker/netns/50021cc80909",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35269"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35270"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35273"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35271"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35272"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-969029": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:5f:37:4f:d4:79",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "6f58dbb248072889615a2f552ac4d5890af6f4b7a41194d40d66ae581236eb94",
	                    "EndpointID": "8dc7ab379bf3b18e6b137570b8eab51a23befb49b0d67d2ea9b90031e0f21ac5",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-969029",
	                        "d3cb17036a8e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-969029 -n embed-certs-969029
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-969029 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-969029 logs -n 25: (1.525379732s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │         PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-679101 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-679101   │ jenkins │ v1.37.0 │ 23 Nov 25 10:53 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ force-systemd-env-479166 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-479166 │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p force-systemd-env-479166                                                                                                                                                                                                                         │ force-systemd-env-479166 │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p cert-options-501705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-501705      │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ cert-options-501705 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-501705      │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ ssh     │ -p cert-options-501705 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-501705      │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ delete  │ -p cert-options-501705                                                                                                                                                                                                                              │ cert-options-501705      │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:54 UTC │
	│ start   │ -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:54 UTC │ 23 Nov 25 10:55 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-162750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:55 UTC │ 23 Nov 25 10:55 UTC │
	│ stop    │ -p old-k8s-version-162750 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:55 UTC │ 23 Nov 25 10:56 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-162750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:56 UTC │ 23 Nov 25 10:56 UTC │
	│ start   │ -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:56 UTC │ 23 Nov 25 10:57 UTC │
	│ image   │ old-k8s-version-162750 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ pause   │ -p old-k8s-version-162750 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ unpause │ -p old-k8s-version-162750 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p old-k8s-version-162750                                                                                                                                                                                                                           │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p old-k8s-version-162750                                                                                                                                                                                                                           │ old-k8s-version-162750   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ start   │ -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-055571        │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:58 UTC │
	│ start   │ -p cert-expiration-679101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-679101   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ delete  │ -p cert-expiration-679101                                                                                                                                                                                                                           │ cert-expiration-679101   │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:57 UTC │
	│ start   │ -p embed-certs-969029 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-969029       │ jenkins │ v1.37.0 │ 23 Nov 25 10:57 UTC │ 23 Nov 25 10:59 UTC │
	│ addons  │ enable metrics-server -p no-preload-055571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-055571        │ jenkins │ v1.37.0 │ 23 Nov 25 10:58 UTC │ 23 Nov 25 10:58 UTC │
	│ stop    │ -p no-preload-055571 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-055571        │ jenkins │ v1.37.0 │ 23 Nov 25 10:58 UTC │ 23 Nov 25 10:59 UTC │
	│ addons  │ enable dashboard -p no-preload-055571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-055571        │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ start   │ -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-055571        │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:59:03
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:59:03.121926 1801378 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:59:03.122055 1801378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:59:03.122066 1801378 out.go:374] Setting ErrFile to fd 2...
	I1123 10:59:03.122071 1801378 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:59:03.122312 1801378 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:59:03.122661 1801378 out.go:368] Setting JSON to false
	I1123 10:59:03.123644 1801378 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":42088,"bootTime":1763853455,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:59:03.123715 1801378 start.go:143] virtualization:  
	I1123 10:59:03.129025 1801378 out.go:179] * [no-preload-055571] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:59:03.132201 1801378 notify.go:221] Checking for updates...
	I1123 10:59:03.132689 1801378 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:59:03.135751 1801378 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:59:03.138825 1801378 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:59:03.141763 1801378 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:59:03.144829 1801378 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:59:03.147794 1801378 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:59:03.151248 1801378 config.go:182] Loaded profile config "no-preload-055571": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:59:03.151861 1801378 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:59:03.196210 1801378 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:59:03.196323 1801378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:59:03.246324 1801378 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:59:03.237313044 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:59:03.246430 1801378 docker.go:319] overlay module found
	I1123 10:59:03.249572 1801378 out.go:179] * Using the docker driver based on existing profile
	I1123 10:59:03.252349 1801378 start.go:309] selected driver: docker
	I1123 10:59:03.252368 1801378 start.go:927] validating driver "docker" against &{Name:no-preload-055571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Moun
t9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:59:03.252471 1801378 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:59:03.253163 1801378 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:59:03.315892 1801378 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:59:03.307405315 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:59:03.316266 1801378 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:59:03.316299 1801378 cni.go:84] Creating CNI manager for ""
	I1123 10:59:03.316354 1801378 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:59:03.316396 1801378 start.go:353] cluster config:
	{Name:no-preload-055571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:59:03.319453 1801378 out.go:179] * Starting "no-preload-055571" primary control-plane node in "no-preload-055571" cluster
	I1123 10:59:03.322189 1801378 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 10:59:03.324980 1801378 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:59:03.327707 1801378 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 10:59:03.327781 1801378 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:59:03.327849 1801378 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/config.json ...
	I1123 10:59:03.328125 1801378 cache.go:107] acquiring lock: {Name:mka2b8a0dc7618a4186b53c7030334ba2743726d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328201 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 10:59:03.328212 1801378 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 96.31µs
	I1123 10:59:03.328224 1801378 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 10:59:03.328237 1801378 cache.go:107] acquiring lock: {Name:mked64c05cbe29983e52beb0ede9990c8342d2c7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328269 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 10:59:03.328283 1801378 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1" took 47.892µs
	I1123 10:59:03.328290 1801378 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 10:59:03.328300 1801378 cache.go:107] acquiring lock: {Name:mkf03376969352d5c6ab837edabe530dd846c7fb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328331 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 10:59:03.328336 1801378 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1" took 37.644µs
	I1123 10:59:03.328342 1801378 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 10:59:03.328355 1801378 cache.go:107] acquiring lock: {Name:mke7eca088924ba5278c3e6f5a538cf5ffb363c5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328380 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 10:59:03.328385 1801378 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1" took 30.908µs
	I1123 10:59:03.328391 1801378 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 10:59:03.328399 1801378 cache.go:107] acquiring lock: {Name:mk02bc90fcdcdd13d6b6d27fb406e2594c2c7586 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328425 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 10:59:03.328430 1801378 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1" took 31.917µs
	I1123 10:59:03.328436 1801378 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 10:59:03.328418 1801378 cache.go:107] acquiring lock: {Name:mkf752427a5200a1894b6bd7cc1653f9055ce950 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328444 1801378 cache.go:107] acquiring lock: {Name:mk0ec107985b2e86560d8614a0f0ddd0531b9cb4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328466 1801378 cache.go:107] acquiring lock: {Name:mk09ce98fa221f42abb869ee72c86d69dd16d276 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.328493 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I1123 10:59:03.328497 1801378 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 32.508µs
	I1123 10:59:03.328504 1801378 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 10:59:03.328499 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 10:59:03.328513 1801378 cache.go:115] /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 10:59:03.328522 1801378 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 110.603µs
	I1123 10:59:03.328528 1801378 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 10:59:03.328513 1801378 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 70.029µs
	I1123 10:59:03.328535 1801378 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 10:59:03.328541 1801378 cache.go:87] Successfully saved all images to host disk.
	I1123 10:59:03.346738 1801378 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 10:59:03.346761 1801378 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 10:59:03.346777 1801378 cache.go:243] Successfully downloaded all kic artifacts
	I1123 10:59:03.346807 1801378 start.go:360] acquireMachinesLock for no-preload-055571: {Name:mk3ea9b9eaa721e5203c43f3e725422cc94c48a7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 10:59:03.346865 1801378 start.go:364] duration metric: took 37.316µs to acquireMachinesLock for "no-preload-055571"
	I1123 10:59:03.346891 1801378 start.go:96] Skipping create...Using existing machine configuration
	I1123 10:59:03.346902 1801378 fix.go:54] fixHost starting: 
	I1123 10:59:03.347161 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:03.364311 1801378 fix.go:112] recreateIfNeeded on no-preload-055571: state=Stopped err=<nil>
	W1123 10:59:03.364360 1801378 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 10:59:04.257574 1795697 node_ready.go:57] node "embed-certs-969029" has "Ready":"False" status (will retry)
	I1123 10:59:06.259468 1795697 node_ready.go:49] node "embed-certs-969029" is "Ready"
	I1123 10:59:06.259502 1795697 node_ready.go:38] duration metric: took 40.505264475s for node "embed-certs-969029" to be "Ready" ...
	I1123 10:59:06.259520 1795697 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:59:06.259580 1795697 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:59:06.279807 1795697 api_server.go:72] duration metric: took 41.457383999s to wait for apiserver process to appear ...
	I1123 10:59:06.279836 1795697 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:59:06.279856 1795697 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 10:59:06.288082 1795697 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 10:59:06.289059 1795697 api_server.go:141] control plane version: v1.34.1
	I1123 10:59:06.289084 1795697 api_server.go:131] duration metric: took 9.241619ms to wait for apiserver health ...
	I1123 10:59:06.289093 1795697 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 10:59:06.295051 1795697 system_pods.go:59] 8 kube-system pods found
	I1123 10:59:06.295086 1795697 system_pods.go:61] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Pending
	I1123 10:59:06.295093 1795697 system_pods.go:61] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:06.295098 1795697 system_pods.go:61] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:06.295102 1795697 system_pods.go:61] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:06.295106 1795697 system_pods.go:61] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:06.295110 1795697 system_pods.go:61] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:06.295114 1795697 system_pods.go:61] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:06.295118 1795697 system_pods.go:61] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Pending
	I1123 10:59:06.295124 1795697 system_pods.go:74] duration metric: took 6.024698ms to wait for pod list to return data ...
	I1123 10:59:06.295131 1795697 default_sa.go:34] waiting for default service account to be created ...
	I1123 10:59:06.297744 1795697 default_sa.go:45] found service account: "default"
	I1123 10:59:06.297771 1795697 default_sa.go:55] duration metric: took 2.633921ms for default service account to be created ...
	I1123 10:59:06.297781 1795697 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 10:59:06.300506 1795697 system_pods.go:86] 8 kube-system pods found
	I1123 10:59:06.300532 1795697 system_pods.go:89] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Pending
	I1123 10:59:06.300538 1795697 system_pods.go:89] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:06.300552 1795697 system_pods.go:89] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:06.300558 1795697 system_pods.go:89] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:06.300563 1795697 system_pods.go:89] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:06.300567 1795697 system_pods.go:89] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:06.300571 1795697 system_pods.go:89] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:06.300575 1795697 system_pods.go:89] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Pending
	I1123 10:59:06.300602 1795697 retry.go:31] will retry after 251.978529ms: missing components: kube-dns
	I1123 10:59:06.556277 1795697 system_pods.go:86] 8 kube-system pods found
	I1123 10:59:06.556314 1795697 system_pods.go:89] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:59:06.556320 1795697 system_pods.go:89] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:06.556327 1795697 system_pods.go:89] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:06.556336 1795697 system_pods.go:89] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:06.556341 1795697 system_pods.go:89] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:06.556345 1795697 system_pods.go:89] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:06.556349 1795697 system_pods.go:89] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:06.556355 1795697 system_pods.go:89] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:59:06.556368 1795697 retry.go:31] will retry after 326.704802ms: missing components: kube-dns
	I1123 10:59:06.888108 1795697 system_pods.go:86] 8 kube-system pods found
	I1123 10:59:06.888143 1795697 system_pods.go:89] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:59:06.888150 1795697 system_pods.go:89] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:06.888156 1795697 system_pods.go:89] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:06.888161 1795697 system_pods.go:89] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:06.888166 1795697 system_pods.go:89] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:06.888171 1795697 system_pods.go:89] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:06.888176 1795697 system_pods.go:89] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:06.888181 1795697 system_pods.go:89] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:59:06.888200 1795697 retry.go:31] will retry after 482.718586ms: missing components: kube-dns
	I1123 10:59:07.380060 1795697 system_pods.go:86] 8 kube-system pods found
	I1123 10:59:07.380091 1795697 system_pods.go:89] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 10:59:07.380098 1795697 system_pods.go:89] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:07.380103 1795697 system_pods.go:89] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:07.380108 1795697 system_pods.go:89] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:07.380114 1795697 system_pods.go:89] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:07.380118 1795697 system_pods.go:89] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:07.380121 1795697 system_pods.go:89] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:07.380127 1795697 system_pods.go:89] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 10:59:07.380141 1795697 retry.go:31] will retry after 524.899428ms: missing components: kube-dns
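The retry loop above is waiting for the coredns pod (the "kube-dns" component) to leave Pending before the k8s-apps check passes. A minimal sketch of how the same condition could be observed by hand, assuming the embed-certs-969029 kubeconfig context from this run is available:

	# list the pods the wait loop checks; coredns pods carry the k8s-app=kube-dns label
	kubectl --context embed-certs-969029 -n kube-system get pods -l k8s-app=kube-dns
	# block until the coredns pod reports Ready (or the timeout expires)
	kubectl --context embed-certs-969029 -n kube-system wait pod -l k8s-app=kube-dns --for=condition=Ready --timeout=120s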
	I1123 10:59:03.367692 1801378 out.go:252] * Restarting existing docker container for "no-preload-055571" ...
	I1123 10:59:03.367780 1801378 cli_runner.go:164] Run: docker start no-preload-055571
	I1123 10:59:03.639681 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:03.666085 1801378 kic.go:430] container "no-preload-055571" state is running.
	I1123 10:59:03.666481 1801378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-055571
	I1123 10:59:03.688932 1801378 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/config.json ...
	I1123 10:59:03.689160 1801378 machine.go:94] provisionDockerMachine start ...
	I1123 10:59:03.689234 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:03.710099 1801378 main.go:143] libmachine: Using SSH client type: native
	I1123 10:59:03.710419 1801378 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35274 <nil> <nil>}
	I1123 10:59:03.710427 1801378 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 10:59:03.711356 1801378 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 10:59:06.875581 1801378 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-055571
	
	I1123 10:59:06.875605 1801378 ubuntu.go:182] provisioning hostname "no-preload-055571"
	I1123 10:59:06.875671 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:06.900928 1801378 main.go:143] libmachine: Using SSH client type: native
	I1123 10:59:06.901237 1801378 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35274 <nil> <nil>}
	I1123 10:59:06.901248 1801378 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-055571 && echo "no-preload-055571" | sudo tee /etc/hostname
	I1123 10:59:07.087756 1801378 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-055571
	
	I1123 10:59:07.087849 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:07.105470 1801378 main.go:143] libmachine: Using SSH client type: native
	I1123 10:59:07.105795 1801378 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35274 <nil> <nil>}
	I1123 10:59:07.105819 1801378 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-055571' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-055571/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-055571' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 10:59:07.255239 1801378 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 10:59:07.255268 1801378 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-1582671/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-1582671/.minikube}
	I1123 10:59:07.255287 1801378 ubuntu.go:190] setting up certificates
	I1123 10:59:07.255296 1801378 provision.go:84] configureAuth start
	I1123 10:59:07.255362 1801378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-055571
	I1123 10:59:07.277246 1801378 provision.go:143] copyHostCerts
	I1123 10:59:07.277327 1801378 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem, removing ...
	I1123 10:59:07.277340 1801378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem
	I1123 10:59:07.277418 1801378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem (1078 bytes)
	I1123 10:59:07.277527 1801378 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem, removing ...
	I1123 10:59:07.277537 1801378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem
	I1123 10:59:07.277564 1801378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem (1123 bytes)
	I1123 10:59:07.277629 1801378 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem, removing ...
	I1123 10:59:07.277642 1801378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem
	I1123 10:59:07.277667 1801378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem (1675 bytes)
	I1123 10:59:07.277726 1801378 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem org=jenkins.no-preload-055571 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-055571]
	I1123 10:59:07.503680 1801378 provision.go:177] copyRemoteCerts
	I1123 10:59:07.503753 1801378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 10:59:07.503801 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:07.522336 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:07.630920 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 10:59:07.648953 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 10:59:07.666459 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 10:59:07.685616 1801378 provision.go:87] duration metric: took 430.297483ms to configureAuth
	I1123 10:59:07.685699 1801378 ubuntu.go:206] setting minikube options for container-runtime
	I1123 10:59:07.685939 1801378 config.go:182] Loaded profile config "no-preload-055571": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:59:07.685956 1801378 machine.go:97] duration metric: took 3.996782916s to provisionDockerMachine
	I1123 10:59:07.685965 1801378 start.go:293] postStartSetup for "no-preload-055571" (driver="docker")
	I1123 10:59:07.685988 1801378 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 10:59:07.686046 1801378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 10:59:07.686117 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:07.703283 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:07.810843 1801378 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 10:59:07.814039 1801378 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 10:59:07.814066 1801378 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 10:59:07.814077 1801378 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/addons for local assets ...
	I1123 10:59:07.814127 1801378 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/files for local assets ...
	I1123 10:59:07.814212 1801378 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem -> 15845322.pem in /etc/ssl/certs
	I1123 10:59:07.814317 1801378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 10:59:07.821461 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:59:07.838573 1801378 start.go:296] duration metric: took 152.592595ms for postStartSetup
	I1123 10:59:07.838673 1801378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:59:07.838717 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:07.855946 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:07.959985 1801378 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 10:59:07.964454 1801378 fix.go:56] duration metric: took 4.61754418s for fixHost
	I1123 10:59:07.964478 1801378 start.go:83] releasing machines lock for "no-preload-055571", held for 4.61759894s
	I1123 10:59:07.964547 1801378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-055571
	I1123 10:59:07.981265 1801378 ssh_runner.go:195] Run: cat /version.json
	I1123 10:59:07.981337 1801378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 10:59:07.981354 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:07.981390 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:08.002941 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:08.007232 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:08.217513 1801378 ssh_runner.go:195] Run: systemctl --version
	I1123 10:59:08.223881 1801378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 10:59:08.228203 1801378 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 10:59:08.228302 1801378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 10:59:08.238525 1801378 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 10:59:08.238553 1801378 start.go:496] detecting cgroup driver to use...
	I1123 10:59:08.238603 1801378 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 10:59:08.238657 1801378 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 10:59:08.256450 1801378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 10:59:08.276377 1801378 docker.go:218] disabling cri-docker service (if available) ...
	I1123 10:59:08.276469 1801378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 10:59:08.292185 1801378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 10:59:08.305378 1801378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 10:59:08.420141 1801378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 10:59:08.543693 1801378 docker.go:234] disabling docker service ...
	I1123 10:59:08.543831 1801378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 10:59:08.560569 1801378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 10:59:08.573247 1801378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 10:59:08.695349 1801378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 10:59:08.822800 1801378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 10:59:08.838750 1801378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 10:59:08.855970 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 10:59:08.866019 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 10:59:08.875071 1801378 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 10:59:08.875240 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 10:59:08.884630 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:59:08.893099 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 10:59:08.901740 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 10:59:08.910017 1801378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 10:59:08.917595 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 10:59:08.925913 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 10:59:08.936081 1801378 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 10:59:08.945101 1801378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 10:59:08.953410 1801378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 10:59:08.960678 1801378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:59:09.080393 1801378 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 10:59:09.232443 1801378 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 10:59:09.232596 1801378 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 10:59:09.241856 1801378 start.go:564] Will wait 60s for crictl version
	I1123 10:59:09.241962 1801378 ssh_runner.go:195] Run: which crictl
	I1123 10:59:09.245894 1801378 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 10:59:09.277292 1801378 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 10:59:09.277444 1801378 ssh_runner.go:195] Run: containerd --version
	I1123 10:59:09.298858 1801378 ssh_runner.go:195] Run: containerd --version
	I1123 10:59:09.323188 1801378 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
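The sequence above rewrites /etc/containerd/config.toml so containerd uses the cgroupfs cgroup driver and the runc.v2 runtime, points crictl at the containerd socket via /etc/crictl.yaml, and then restarts containerd. A minimal sketch of how the result could be confirmed on the node, assuming the no-preload-055571 container is up and reachable over SSH:

	# show the SystemdCgroup setting that the sed commands above flipped to false
	minikube ssh -p no-preload-055571 -- grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# confirm crictl reaches containerd through the configured endpoint (the log runs the same check)
	minikube ssh -p no-preload-055571 -- sudo crictl version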
	I1123 10:59:07.909619 1795697 system_pods.go:86] 8 kube-system pods found
	I1123 10:59:07.909647 1795697 system_pods.go:89] "coredns-66bc5c9577-pgvtk" [81912f6a-cbf4-4bd7-84ac-ca2ffc36269c] Running
	I1123 10:59:07.909654 1795697 system_pods.go:89] "etcd-embed-certs-969029" [3f64dd66-dfcb-4459-88d9-27732fee506f] Running
	I1123 10:59:07.909658 1795697 system_pods.go:89] "kindnet-969gr" [da716ec2-e4e8-4663-a452-0c9925b721e1] Running
	I1123 10:59:07.909663 1795697 system_pods.go:89] "kube-apiserver-embed-certs-969029" [66bf8b35-a2b3-46d5-a600-ee62787ce764] Running
	I1123 10:59:07.909670 1795697 system_pods.go:89] "kube-controller-manager-embed-certs-969029" [5ff0d6c2-ffe3-4b7a-9835-d95ce446ed9c] Running
	I1123 10:59:07.909674 1795697 system_pods.go:89] "kube-proxy-dsz2q" [002c9f0c-528d-4eed-b241-435de51af248] Running
	I1123 10:59:07.909678 1795697 system_pods.go:89] "kube-scheduler-embed-certs-969029" [03ccae3e-a405-4f5a-9706-6ed1cc91924f] Running
	I1123 10:59:07.909682 1795697 system_pods.go:89] "storage-provisioner" [dec18915-2717-4390-96b8-95f56ec7405f] Running
	I1123 10:59:07.909689 1795697 system_pods.go:126] duration metric: took 1.611903084s to wait for k8s-apps to be running ...
	I1123 10:59:07.909695 1795697 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 10:59:07.909745 1795697 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:59:07.923682 1795697 system_svc.go:56] duration metric: took 13.97683ms WaitForService to wait for kubelet
	I1123 10:59:07.923707 1795697 kubeadm.go:587] duration metric: took 43.101288176s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 10:59:07.923725 1795697 node_conditions.go:102] verifying NodePressure condition ...
	I1123 10:59:07.926826 1795697 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 10:59:07.926901 1795697 node_conditions.go:123] node cpu capacity is 2
	I1123 10:59:07.926929 1795697 node_conditions.go:105] duration metric: took 3.197632ms to run NodePressure ...
	I1123 10:59:07.926971 1795697 start.go:242] waiting for startup goroutines ...
	I1123 10:59:07.926994 1795697 start.go:247] waiting for cluster config update ...
	I1123 10:59:07.927017 1795697 start.go:256] writing updated cluster config ...
	I1123 10:59:07.927401 1795697 ssh_runner.go:195] Run: rm -f paused
	I1123 10:59:07.930837 1795697 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:59:07.934456 1795697 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-pgvtk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.938849 1795697 pod_ready.go:94] pod "coredns-66bc5c9577-pgvtk" is "Ready"
	I1123 10:59:07.938917 1795697 pod_ready.go:86] duration metric: took 4.437028ms for pod "coredns-66bc5c9577-pgvtk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.940947 1795697 pod_ready.go:83] waiting for pod "etcd-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.945119 1795697 pod_ready.go:94] pod "etcd-embed-certs-969029" is "Ready"
	I1123 10:59:07.945148 1795697 pod_ready.go:86] duration metric: took 4.178761ms for pod "etcd-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.947075 1795697 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.951251 1795697 pod_ready.go:94] pod "kube-apiserver-embed-certs-969029" is "Ready"
	I1123 10:59:07.951277 1795697 pod_ready.go:86] duration metric: took 4.178015ms for pod "kube-apiserver-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:07.953259 1795697 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:08.334923 1795697 pod_ready.go:94] pod "kube-controller-manager-embed-certs-969029" is "Ready"
	I1123 10:59:08.334954 1795697 pod_ready.go:86] duration metric: took 381.674182ms for pod "kube-controller-manager-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:08.535462 1795697 pod_ready.go:83] waiting for pod "kube-proxy-dsz2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:08.935107 1795697 pod_ready.go:94] pod "kube-proxy-dsz2q" is "Ready"
	I1123 10:59:08.935129 1795697 pod_ready.go:86] duration metric: took 399.646474ms for pod "kube-proxy-dsz2q" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:09.135882 1795697 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:09.535348 1795697 pod_ready.go:94] pod "kube-scheduler-embed-certs-969029" is "Ready"
	I1123 10:59:09.535372 1795697 pod_ready.go:86] duration metric: took 399.466997ms for pod "kube-scheduler-embed-certs-969029" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 10:59:09.535385 1795697 pod_ready.go:40] duration metric: took 1.604522906s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 10:59:09.615514 1795697 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 10:59:09.619373 1795697 out.go:179] * Done! kubectl is now configured to use "embed-certs-969029" cluster and "default" namespace by default
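At this point the embed-certs start has finished and kubectl is pointed at the new context; the final lines also record a one-minor version skew between kubectl 1.33.2 and the 1.34.1 cluster. A sketch of reproducing those checks by hand, assuming the same kubeconfig:

	# print client and server versions for the context this run just configured
	kubectl version --context embed-certs-969029
	# confirm the node reached Ready, which the wait above measured at roughly 40s
	kubectl --context embed-certs-969029 get nodes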
	I1123 10:59:09.326166 1801378 cli_runner.go:164] Run: docker network inspect no-preload-055571 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 10:59:09.344982 1801378 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 10:59:09.348838 1801378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:59:09.358706 1801378 kubeadm.go:884] updating cluster {Name:no-preload-055571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 10:59:09.358851 1801378 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 10:59:09.358906 1801378 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 10:59:09.384809 1801378 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 10:59:09.384833 1801378 cache_images.go:86] Images are preloaded, skipping loading
	I1123 10:59:09.384841 1801378 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 containerd true true} ...
	I1123 10:59:09.384941 1801378 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-055571 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 10:59:09.385013 1801378 ssh_runner.go:195] Run: sudo crictl info
	I1123 10:59:09.411314 1801378 cni.go:84] Creating CNI manager for ""
	I1123 10:59:09.411344 1801378 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:59:09.411364 1801378 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 10:59:09.411388 1801378 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-055571 NodeName:no-preload-055571 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 10:59:09.411501 1801378 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "no-preload-055571"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 10:59:09.411572 1801378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 10:59:09.420798 1801378 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 10:59:09.420920 1801378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 10:59:09.428155 1801378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 10:59:09.441361 1801378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 10:59:09.456410 1801378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2230 bytes)
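	The kubeadm.yaml.new written above bundles the four documents rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick, tool-free way to sanity-check that split on the node (illustrative only, not something the test runs):
	
	  CFG=/var/tmp/minikube/kubeadm.yaml.new
	  sudo grep -c '^---$' "$CFG"             # 3 separators => 4 documents
	  sudo awk '/^kind:/ {print $2}' "$CFG"   # InitConfiguration, ClusterConfiguration, ...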
	I1123 10:59:09.469357 1801378 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 10:59:09.472985 1801378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 10:59:09.482113 1801378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:59:09.613780 1801378 ssh_runner.go:195] Run: sudo systemctl start kubelet
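	After the daemon-reload above, the kubelet unit is the stock kubelet.service plus the 10-kubeadm.conf drop-in written a few lines earlier; generic systemd commands (not run by the test) show the merged result on the node:
	
	  sudo systemctl cat kubelet              # unit file plus drop-ins
	  systemctl show -p ExecStart kubelet     # effective ExecStart after the override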
	I1123 10:59:09.633064 1801378 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571 for IP: 192.168.85.2
	I1123 10:59:09.633086 1801378 certs.go:195] generating shared ca certs ...
	I1123 10:59:09.633102 1801378 certs.go:227] acquiring lock for ca certs: {Name:mk3cca888d785818ac92c3c8d4e66a37bae0b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:59:09.633239 1801378 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key
	I1123 10:59:09.633285 1801378 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key
	I1123 10:59:09.633297 1801378 certs.go:257] generating profile certs ...
	I1123 10:59:09.633428 1801378 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.key
	I1123 10:59:09.633516 1801378 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key.3d6856fb
	I1123 10:59:09.633563 1801378 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key
	I1123 10:59:09.633674 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem (1338 bytes)
	W1123 10:59:09.633709 1801378 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532_empty.pem, impossibly tiny 0 bytes
	I1123 10:59:09.633721 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 10:59:09.633750 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem (1078 bytes)
	I1123 10:59:09.633779 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem (1123 bytes)
	I1123 10:59:09.633807 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem (1675 bytes)
	I1123 10:59:09.633855 1801378 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 10:59:09.634581 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 10:59:09.689523 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 10:59:09.739831 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 10:59:09.811371 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 10:59:09.894159 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 10:59:09.936774 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 10:59:09.967536 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 10:59:09.985896 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 10:59:10.009672 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /usr/share/ca-certificates/15845322.pem (1708 bytes)
	I1123 10:59:10.031516 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 10:59:10.061642 1801378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem --> /usr/share/ca-certificates/1584532.pem (1338 bytes)
	I1123 10:59:10.082255 1801378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 10:59:10.110140 1801378 ssh_runner.go:195] Run: openssl version
	I1123 10:59:10.119842 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 10:59:10.130187 1801378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:59:10.135373 1801378 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:10 /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:59:10.135443 1801378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 10:59:10.183928 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 10:59:10.192197 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584532.pem && ln -fs /usr/share/ca-certificates/1584532.pem /etc/ssl/certs/1584532.pem"
	I1123 10:59:10.200632 1801378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584532.pem
	I1123 10:59:10.204233 1801378 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:17 /usr/share/ca-certificates/1584532.pem
	I1123 10:59:10.204295 1801378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584532.pem
	I1123 10:59:10.248721 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584532.pem /etc/ssl/certs/51391683.0"
	I1123 10:59:10.257302 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15845322.pem && ln -fs /usr/share/ca-certificates/15845322.pem /etc/ssl/certs/15845322.pem"
	I1123 10:59:10.265452 1801378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15845322.pem
	I1123 10:59:10.270002 1801378 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:17 /usr/share/ca-certificates/15845322.pem
	I1123 10:59:10.270063 1801378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15845322.pem
	I1123 10:59:10.312949 1801378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15845322.pem /etc/ssl/certs/3ec20f2e.0"
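	The openssl/ln pairs above install each CA into OpenSSL's trust directory under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0). A minimal equivalent of one such pair, using the first certificate from the log:
	
	  CERT=/usr/share/ca-certificates/minikubeCA.pem
	  HASH=$(openssl x509 -hash -noout -in "$CERT")
	  sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"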
	I1123 10:59:10.321054 1801378 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 10:59:10.327382 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 10:59:10.402943 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 10:59:10.463434 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 10:59:10.524730 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 10:59:10.604457 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 10:59:10.692754 1801378 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
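	The -checkend 86400 runs above make openssl exit 0 only if the certificate is still valid 24 hours from now; a sketch of that exit-code convention with one of the paths checked above:
	
	  CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
	  if openssl x509 -noout -in "$CRT" -checkend 86400; then
	    echo "valid for at least another 24h"
	  else
	    echo "expires within 24h"
	  fi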
	I1123 10:59:10.749103 1801378 kubeadm.go:401] StartCluster: {Name:no-preload-055571 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-055571 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L
MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:59:10.749206 1801378 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 10:59:10.749288 1801378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 10:59:10.818676 1801378 cri.go:89] found id: "d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83"
	I1123 10:59:10.818709 1801378 cri.go:89] found id: "141dfe2fe2c0f3cda59fd1829ec905adf738ec2a2b54570701779736a9b5c611"
	I1123 10:59:10.818715 1801378 cri.go:89] found id: "f6ff8574431495f4a49d9c3759b8049dfc4450cdb014fcd3928c598ca2c0da52"
	I1123 10:59:10.818726 1801378 cri.go:89] found id: "9d45eab165f426941b46cacf4c992c6d8d994ff8d83232faff07678871d4234f"
	I1123 10:59:10.818729 1801378 cri.go:89] found id: "8b471e7e9bbda9cbfbea76934750632ac310334af415b16e44073b2e576eabc9"
	I1123 10:59:10.818732 1801378 cri.go:89] found id: "2f827144cf7fac652ccb74aef0066e57b21ecef01a8dcb73809e96022b694400"
	I1123 10:59:10.818736 1801378 cri.go:89] found id: "14b800b67ad6052023ad76ace7ece6ce928c08d72e9876a0ba4ec63aa2fd2940"
	I1123 10:59:10.818738 1801378 cri.go:89] found id: "6249f178fb08fff7a76e05ef2091e7236bff165ee849beeba741138fd5d4e5d1"
	I1123 10:59:10.818742 1801378 cri.go:89] found id: "eab30623258b276d71d20e0094aa488fe2eaf689d062eb457557742f0cf5e8dd"
	I1123 10:59:10.818750 1801378 cri.go:89] found id: ""
	I1123 10:59:10.818819 1801378 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 10:59:10.857473 1801378 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664","pid":932,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664/rootfs","created":"2025-11-23T10:59:10.698789689Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-no-preload-055571_ce9e5a9dcaff1330f74c9cdff3f1a808","io.kubernetes.cri.sandbox-memor
y":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-no-preload-055571","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"ce9e5a9dcaff1330f74c9cdff3f1a808"},"owner":"root"},{"ociVersion":"1.2.1","id":"b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0","pid":940,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0/rootfs","created":"2025-11-23T10:59:10.691440306Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0","io.kubernetes.c
ri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-no-preload-055571_19263a60045981406ff42a29aacfbe1d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-no-preload-055571","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"19263a60045981406ff42a29aacfbe1d"},"owner":"root"},{"ociVersion":"1.2.1","id":"b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396","pid":877,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396/rootfs","created":"2025-11-23T10:59:10.599997408Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-
quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-no-preload-055571_1009b601f1db86628e469db0a601cbf6","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-no-preload-055571","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1009b601f1db86628e469db0a601cbf6"},"owner":"root"},{"ociVersion":"1.2.1","id":"d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83","pid":993,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83/rootfs","created":"2025-11-23T10:59:10.836245069Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.cont
ainer-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396","io.kubernetes.cri.sandbox-name":"etcd-no-preload-055571","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"1009b601f1db86628e469db0a601cbf6"},"owner":"root"},{"ociVersion":"1.2.1","id":"ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22","pid":977,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22/rootfs","created":"2025-11-23T10:59:10.780522368Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quo
ta":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-no-preload-055571_85fe8bb1853930358150671c0b7a1d0a","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-no-preload-055571","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"85fe8bb1853930358150671c0b7a1d0a"},"owner":"root"}]
	I1123 10:59:10.857640 1801378 cri.go:126] list returned 5 containers
	I1123 10:59:10.857656 1801378 cri.go:129] container: {ID:5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664 Status:running}
	I1123 10:59:10.857675 1801378 cri.go:131] skipping 5730144535d60946fa3e1bf8ed88288ea83687c1f61cd3a49e62b23f39893664 - not in ps
	I1123 10:59:10.857680 1801378 cri.go:129] container: {ID:b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0 Status:running}
	I1123 10:59:10.857685 1801378 cri.go:131] skipping b76f936c58b5df6b1db1171bef40a08382743652e41a0cc9e122765f45d0ddf0 - not in ps
	I1123 10:59:10.857696 1801378 cri.go:129] container: {ID:b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396 Status:running}
	I1123 10:59:10.857700 1801378 cri.go:131] skipping b8291b3de4248e07e18664760ebf9b11674188d10a1e075ad8ad427eb7efb396 - not in ps
	I1123 10:59:10.857703 1801378 cri.go:129] container: {ID:d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83 Status:created}
	I1123 10:59:10.857709 1801378 cri.go:135] skipping {d0a172c0c690d2cdeb4d322eadc9e0dccbeb3993120248ab103026bdf9e9fc83 created}: state = "created", want "paused"
	I1123 10:59:10.857719 1801378 cri.go:129] container: {ID:ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22 Status:created}
	I1123 10:59:10.857746 1801378 cri.go:131] skipping ffb60493283cb8be96d1ecc33271f4a7bfdfdcef9a49278cab7a13392d4a2d22 - not in ps
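	The list being filtered above is runc's own state listing for the k8s.io namespace; the id/status pairs it is reduced to can be reproduced by hand (assuming jq is present, which the node image may not guarantee):
	
	  sudo runc --root /run/containerd/runc/k8s.io list -f json \
	    | jq -r '.[] | "\(.id)\t\(.status)"'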
	I1123 10:59:10.857811 1801378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 10:59:10.866782 1801378 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 10:59:10.866815 1801378 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 10:59:10.866878 1801378 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 10:59:10.878714 1801378 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 10:59:10.880264 1801378 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-055571" does not appear in /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:59:10.880825 1801378 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-1582671/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-055571" cluster setting kubeconfig missing "no-preload-055571" context setting]
	I1123 10:59:10.881619 1801378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:59:10.884809 1801378 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 10:59:10.907981 1801378 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 10:59:10.908018 1801378 kubeadm.go:602] duration metric: took 41.196108ms to restartPrimaryControlPlane
	I1123 10:59:10.908036 1801378 kubeadm.go:403] duration metric: took 158.935993ms to StartCluster
	I1123 10:59:10.908052 1801378 settings.go:142] acquiring lock: {Name:mk2ffa164862318fd53ac563f81d54c15c17157b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:59:10.908124 1801378 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:59:10.909720 1801378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:59:10.910376 1801378 config.go:182] Loaded profile config "no-preload-055571": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:59:10.910153 1801378 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 10:59:10.910485 1801378 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 10:59:10.910771 1801378 addons.go:70] Setting storage-provisioner=true in profile "no-preload-055571"
	I1123 10:59:10.910788 1801378 addons.go:239] Setting addon storage-provisioner=true in "no-preload-055571"
	W1123 10:59:10.910795 1801378 addons.go:248] addon storage-provisioner should already be in state true
	I1123 10:59:10.910820 1801378 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:59:10.910868 1801378 addons.go:70] Setting default-storageclass=true in profile "no-preload-055571"
	I1123 10:59:10.910886 1801378 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-055571"
	I1123 10:59:10.911290 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:10.911344 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:10.911719 1801378 addons.go:70] Setting metrics-server=true in profile "no-preload-055571"
	I1123 10:59:10.911743 1801378 addons.go:239] Setting addon metrics-server=true in "no-preload-055571"
	W1123 10:59:10.911750 1801378 addons.go:248] addon metrics-server should already be in state true
	I1123 10:59:10.911782 1801378 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:59:10.912282 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:10.915875 1801378 out.go:179] * Verifying Kubernetes components...
	I1123 10:59:10.916108 1801378 addons.go:70] Setting dashboard=true in profile "no-preload-055571"
	I1123 10:59:10.916127 1801378 addons.go:239] Setting addon dashboard=true in "no-preload-055571"
	W1123 10:59:10.916151 1801378 addons.go:248] addon dashboard should already be in state true
	I1123 10:59:10.916189 1801378 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:59:10.916647 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:10.921982 1801378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 10:59:10.982352 1801378 addons.go:239] Setting addon default-storageclass=true in "no-preload-055571"
	W1123 10:59:10.982376 1801378 addons.go:248] addon default-storageclass should already be in state true
	I1123 10:59:10.982400 1801378 host.go:66] Checking if "no-preload-055571" exists ...
	I1123 10:59:10.986833 1801378 cli_runner.go:164] Run: docker container inspect no-preload-055571 --format={{.State.Status}}
	I1123 10:59:10.991562 1801378 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 10:59:10.994588 1801378 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 10:59:10.994610 1801378 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 10:59:10.994694 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:10.995101 1801378 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 10:59:10.998072 1801378 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:59:10.998147 1801378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 10:59:10.998278 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:11.037070 1801378 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 10:59:11.037236 1801378 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 10:59:11.037250 1801378 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 10:59:11.037311 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:11.047479 1801378 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 10:59:11.055235 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 10:59:11.055267 1801378 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 10:59:11.055346 1801378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-055571
	I1123 10:59:11.064685 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:11.105248 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:11.118415 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:11.119561 1801378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35274 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/no-preload-055571/id_rsa Username:docker}
	I1123 10:59:11.312012 1801378 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 10:59:11.416683 1801378 node_ready.go:35] waiting up to 6m0s for node "no-preload-055571" to be "Ready" ...
	I1123 10:59:11.466961 1801378 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 10:59:11.467039 1801378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 10:59:11.530787 1801378 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 10:59:11.530864 1801378 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 10:59:11.564816 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 10:59:11.564879 1801378 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 10:59:11.604647 1801378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 10:59:11.645194 1801378 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 10:59:11.645272 1801378 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 10:59:11.657606 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 10:59:11.657683 1801378 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 10:59:11.693475 1801378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 10:59:11.779843 1801378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 10:59:11.781755 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 10:59:11.781816 1801378 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 10:59:11.847299 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 10:59:11.847372 1801378 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 10:59:11.978390 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 10:59:11.978424 1801378 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 10:59:12.084291 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 10:59:12.084378 1801378 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 10:59:12.191841 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 10:59:12.191927 1801378 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 10:59:12.289103 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 10:59:12.289178 1801378 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 10:59:12.358422 1801378 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:59:12.358511 1801378 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 10:59:12.407244 1801378 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 10:59:15.343674 1801378 node_ready.go:49] node "no-preload-055571" is "Ready"
	I1123 10:59:15.343702 1801378 node_ready.go:38] duration metric: took 3.926915129s for node "no-preload-055571" to be "Ready" ...
	I1123 10:59:15.343717 1801378 api_server.go:52] waiting for apiserver process to appear ...
	I1123 10:59:15.343775 1801378 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:59:18.299860 1801378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.695142336s)
	I1123 10:59:18.299964 1801378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (6.606411044s)
	I1123 10:59:18.300109 1801378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.520186504s)
	I1123 10:59:18.300127 1801378 addons.go:495] Verifying addon metrics-server=true in "no-preload-055571"
	I1123 10:59:18.300224 1801378 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (5.892901712s)
	I1123 10:59:18.300252 1801378 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.956465494s)
	I1123 10:59:18.300291 1801378 api_server.go:72] duration metric: took 7.389841623s to wait for apiserver process to appear ...
	I1123 10:59:18.300304 1801378 api_server.go:88] waiting for apiserver healthz status ...
	I1123 10:59:18.300320 1801378 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 10:59:18.303325 1801378 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-055571 addons enable metrics-server
	
	I1123 10:59:18.310685 1801378 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 10:59:18.310713 1801378 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 10:59:18.317315 1801378 out.go:179] * Enabled addons: storage-provisioner, metrics-server, dashboard, default-storageclass
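	The 500 above is the apiserver's verbose healthz body, reported check by check. The same probe can be issued by hand against the endpoint from the log: -k skips verification of the cluster CA and ?verbose asks for the [+]/[-] breakdown even when the check passes (no token is needed unless anonymous access to /healthz has been disabled):
	
	  curl -sk "https://192.168.85.2:8443/healthz?verbose"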
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	90081b0267e79       1611cd07b61d5       10 seconds ago       Running             busybox                   0                   a24840f1b442c       busybox                                      default
	3fd2478f130bc       138784d87c9c5       16 seconds ago       Running             coredns                   0                   b879e9f20c21e       coredns-66bc5c9577-pgvtk                     kube-system
	2f5d06eedabf2       ba04bb24b9575       16 seconds ago       Running             storage-provisioner       0                   d19bce664a79c       storage-provisioner                          kube-system
	8c09c9a1e0dd4       b1a8c6f707935       57 seconds ago       Running             kindnet-cni               0                   d7cdde11f0797       kindnet-969gr                                kube-system
	9947f7108490a       05baa95f5142d       57 seconds ago       Running             kube-proxy                0                   16cda86e8fa3e       kube-proxy-dsz2q                             kube-system
	c1b9d044a1c2b       a1894772a478e       About a minute ago   Running             etcd                      0                   e5cb3eab2523f       etcd-embed-certs-969029                      kube-system
	d97351f624889       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   ca95abfb5d4cd       kube-apiserver-embed-certs-969029            kube-system
	88e7145750dc1       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   374fc6e71ba48       kube-controller-manager-embed-certs-969029   kube-system
	bf863fc5d6205       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   006b36ea1d9a3       kube-scheduler-embed-certs-969029            kube-system
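	The table above is CRI-level container status; restricting it to a single namespace uses the same label filter that appeared earlier in the restart log (illustrative invocation, not from the test):
	
	  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o table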
	
	
	==> containerd <==
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.768335203Z" level=info msg="Container 3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.792307825Z" level=info msg="CreateContainer within sandbox \"d19bce664a79c1e8fc8fa446843807402d9198cd5d03a05a1a67df00e2c33fc1\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"2f5d06eedabf27faeb1d3c7a374ee174b2d38eb17a6ddaa747e6c3ba71437052\""
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.795767038Z" level=info msg="StartContainer for \"2f5d06eedabf27faeb1d3c7a374ee174b2d38eb17a6ddaa747e6c3ba71437052\""
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.798617725Z" level=info msg="connecting to shim 2f5d06eedabf27faeb1d3c7a374ee174b2d38eb17a6ddaa747e6c3ba71437052" address="unix:///run/containerd/s/e52f12b56e3ca00f2027fc609d3a20e8a314e50765fbc07209fc2649bc48e4b6" protocol=ttrpc version=3
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.800878855Z" level=info msg="CreateContainer within sandbox \"b879e9f20c21e8faa9c7387df36c30c425862429584526369261f0be3a746252\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5\""
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.801611006Z" level=info msg="StartContainer for \"3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5\""
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.802783678Z" level=info msg="connecting to shim 3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5" address="unix:///run/containerd/s/f4b598774fbad6b7ef416c6e46d2560911a8c368f3455e3ced7aaf8d57e82073" protocol=ttrpc version=3
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.946704019Z" level=info msg="StartContainer for \"2f5d06eedabf27faeb1d3c7a374ee174b2d38eb17a6ddaa747e6c3ba71437052\" returns successfully"
	Nov 23 10:59:06 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:06.951051277Z" level=info msg="StartContainer for \"3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5\" returns successfully"
	Nov 23 10:59:10 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:10.219063464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:976d8660-27e9-4d64-bcea-5f2857bfbd4f,Namespace:default,Attempt:0,}"
	Nov 23 10:59:10 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:10.295720346Z" level=info msg="connecting to shim a24840f1b442cc75d449f58849a383fc3412ee403b88ea57012f0ee012264e5e" address="unix:///run/containerd/s/3846421d9eab7c280387efdcbe0b3c79b30e653055d9e15f35fb3938b331c676" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 10:59:10 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:10.406862362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:976d8660-27e9-4d64-bcea-5f2857bfbd4f,Namespace:default,Attempt:0,} returns sandbox id \"a24840f1b442cc75d449f58849a383fc3412ee403b88ea57012f0ee012264e5e\""
	Nov 23 10:59:10 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:10.412449195Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.547131114Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.550353033Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.552971250Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.557112891Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.558551617Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.144689533s"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.558707092Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.573733808Z" level=info msg="CreateContainer within sandbox \"a24840f1b442cc75d449f58849a383fc3412ee403b88ea57012f0ee012264e5e\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.586546785Z" level=info msg="Container 90081b0267e793738de3a749e59d0b43bd1e2a01df03b776c0c18a6629df7a58: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.598709202Z" level=info msg="CreateContainer within sandbox \"a24840f1b442cc75d449f58849a383fc3412ee403b88ea57012f0ee012264e5e\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"90081b0267e793738de3a749e59d0b43bd1e2a01df03b776c0c18a6629df7a58\""
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.601803813Z" level=info msg="StartContainer for \"90081b0267e793738de3a749e59d0b43bd1e2a01df03b776c0c18a6629df7a58\""
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.606078145Z" level=info msg="connecting to shim 90081b0267e793738de3a749e59d0b43bd1e2a01df03b776c0c18a6629df7a58" address="unix:///run/containerd/s/3846421d9eab7c280387efdcbe0b3c79b30e653055d9e15f35fb3938b331c676" protocol=ttrpc version=3
	Nov 23 10:59:12 embed-certs-969029 containerd[761]: time="2025-11-23T10:59:12.730705441Z" level=info msg="StartContainer for \"90081b0267e793738de3a749e59d0b43bd1e2a01df03b776c0c18a6629df7a58\" returns successfully"
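	containerd runs as a systemd unit in this image, so the same window of entries shown above can be pulled on the node with a generic journalctl query (timestamp copied from the entries, the grep pattern is just an example):
	
	  sudo journalctl -u containerd --no-pager --since "2025-11-23 10:59:06" \
	    | grep -E 'PullImage|StartContainer'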
	
	
	==> coredns [3fd2478f130bcecfb9bacc64d52ec72a68fa484b8bdaab8bd5142618eb7a6bd5] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:58600 - 19918 "HINFO IN 5048464527559782916.8105685143490074977. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029402594s
	
	
	==> describe nodes <==
	Name:               embed-certs-969029
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-969029
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=embed-certs-969029
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T10_58_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 10:58:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-969029
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 10:59:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 10:59:21 +0000   Sun, 23 Nov 2025 10:58:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 10:59:21 +0000   Sun, 23 Nov 2025 10:58:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 10:59:21 +0000   Sun, 23 Nov 2025 10:58:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 10:59:21 +0000   Sun, 23 Nov 2025 10:59:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-969029
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                60a0da82-30dc-42f4-8f94-24e171ac05b5
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-pgvtk                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59s
	  kube-system                 etcd-embed-certs-969029                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-969gr                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-embed-certs-969029             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-embed-certs-969029    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-dsz2q                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-embed-certs-969029             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 56s                kube-proxy       
	  Normal   NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 73s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 73s)  kubelet          Node embed-certs-969029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    73s (x8 over 73s)  kubelet          Node embed-certs-969029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node embed-certs-969029 status is now: NodeHasSufficientPID
	  Normal   Starting                 73s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 63s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 63s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node embed-certs-969029 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node embed-certs-969029 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node embed-certs-969029 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                node-controller  Node embed-certs-969029 event: Registered Node embed-certs-969029 in Controller
	  Normal   NodeReady                17s                kubelet          Node embed-certs-969029 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:50] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [c1b9d044a1c2bc1b727f46490e6f0d365dc6c431bca64ca89948b479a95835df] <==
	{"level":"warn","ts":"2025-11-23T10:58:15.153270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.178163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.201661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.213729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.232030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.254157Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.275633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.300272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.310438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.328584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.372987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.417438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.430254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.456657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.472216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59022","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.492620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.528164Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.543980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.557179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.582219Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.597244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.614524Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.631272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.644078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T10:58:15.727084Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59180","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:59:23 up 11:41,  0 user,  load average: 3.28, 3.19, 2.88
	Linux embed-certs-969029 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8c09c9a1e0dd488c3ae89f22758bdfbc3ffc7ede552c1c94e903d4ace20016cc] <==
	I1123 10:58:25.962761       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 10:58:25.963037       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 10:58:26.027308       1 main.go:148] setting mtu 1500 for CNI 
	I1123 10:58:26.027347       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 10:58:26.027376       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T10:58:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 10:58:26.136277       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 10:58:26.136302       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 10:58:26.136310       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 10:58:26.229476       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 10:58:56.136841       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 10:58:56.136843       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 10:58:56.229467       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 10:58:56.229490       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 10:58:57.736487       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 10:58:57.736519       1 metrics.go:72] Registering metrics
	I1123 10:58:57.736737       1 controller.go:711] "Syncing nftables rules"
	I1123 10:59:06.143113       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:59:06.143173       1 main.go:301] handling current node
	I1123 10:59:16.137400       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 10:59:16.137525       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d97351f624889f17b6b6beb7b97f46e4761bff0db0ac24e7478af3bcafd0c577] <==
	I1123 10:58:16.793927       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 10:58:16.794338       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 10:58:16.795439       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 10:58:16.796590       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:16.797287       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 10:58:16.808170       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 10:58:16.809608       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:16.825497       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 10:58:17.391499       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 10:58:17.400812       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 10:58:17.400835       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 10:58:18.483921       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 10:58:18.540991       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 10:58:18.654280       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 10:58:18.720232       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 10:58:18.737182       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 10:58:18.738861       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 10:58:18.748796       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 10:58:19.816378       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 10:58:19.841132       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 10:58:19.863171       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 10:58:24.401073       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 10:58:24.550995       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:24.558677       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 10:58:24.597860       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [88e7145750dc13e96e110d81fec6e8e8687bddaf3b752577c8c8542c93a7af25] <==
	I1123 10:58:23.705030       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 10:58:23.705144       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-969029"
	I1123 10:58:23.705216       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 10:58:23.705279       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 10:58:23.705371       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 10:58:23.705455       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 10:58:23.705478       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 10:58:23.705553       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 10:58:23.705627       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 10:58:23.706268       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 10:58:23.706550       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 10:58:23.706841       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 10:58:23.706869       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 10:58:23.707211       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 10:58:23.707249       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 10:58:23.707738       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 10:58:23.708579       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 10:58:23.710207       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 10:58:23.710551       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 10:58:23.712636       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 10:58:23.718628       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 10:58:23.730148       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 10:58:23.742596       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 10:58:23.744618       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 10:59:08.712384       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9947f7108490a4550e6b3803b512a0e2c01bf5577c5ff272a044aae4140be053] <==
	I1123 10:58:25.905556       1 server_linux.go:53] "Using iptables proxy"
	I1123 10:58:26.035701       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 10:58:26.144788       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 10:58:26.144831       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 10:58:26.144906       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 10:58:26.340834       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 10:58:26.341397       1 server_linux.go:132] "Using iptables Proxier"
	I1123 10:58:26.353098       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 10:58:26.353629       1 server.go:527] "Version info" version="v1.34.1"
	I1123 10:58:26.354073       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 10:58:26.356820       1 config.go:200] "Starting service config controller"
	I1123 10:58:26.356977       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 10:58:26.357173       1 config.go:106] "Starting endpoint slice config controller"
	I1123 10:58:26.358003       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 10:58:26.358153       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 10:58:26.358239       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 10:58:26.359756       1 config.go:309] "Starting node config controller"
	I1123 10:58:26.359877       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 10:58:26.359969       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 10:58:26.457184       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 10:58:26.458437       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 10:58:26.458451       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [bf863fc5d6205f9bf643fb75c7033ef0cd9446ec28cb1351f790d8085f7a4125] <==
	E1123 10:58:16.778047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 10:58:16.783841       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 10:58:16.783913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 10:58:16.783971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 10:58:16.784011       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:58:16.784046       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:58:16.784167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:58:16.784579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:58:16.785269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:58:16.785325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 10:58:16.789402       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 10:58:17.587049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 10:58:17.591652       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 10:58:17.622454       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 10:58:17.795380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 10:58:17.825272       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 10:58:17.897628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 10:58:17.925715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 10:58:17.926091       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 10:58:17.956904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 10:58:17.959394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 10:58:18.040383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 10:58:18.084219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 10:58:18.134640       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1123 10:58:20.842499       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 10:58:21 embed-certs-969029 kubelet[1470]: I1123 10:58:21.360514    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-969029" podStartSLOduration=1.3604864700000001 podStartE2EDuration="1.36048647s" podCreationTimestamp="2025-11-23 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:21.326087755 +0000 UTC m=+1.561526613" watchObservedRunningTime="2025-11-23 10:58:21.36048647 +0000 UTC m=+1.595925328"
	Nov 23 10:58:21 embed-certs-969029 kubelet[1470]: I1123 10:58:21.380137    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-969029" podStartSLOduration=1.380117224 podStartE2EDuration="1.380117224s" podCreationTimestamp="2025-11-23 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:21.36095027 +0000 UTC m=+1.596389145" watchObservedRunningTime="2025-11-23 10:58:21.380117224 +0000 UTC m=+1.615556081"
	Nov 23 10:58:21 embed-certs-969029 kubelet[1470]: I1123 10:58:21.423595    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-969029" podStartSLOduration=1.42357606 podStartE2EDuration="1.42357606s" podCreationTimestamp="2025-11-23 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:21.380387371 +0000 UTC m=+1.615826237" watchObservedRunningTime="2025-11-23 10:58:21.42357606 +0000 UTC m=+1.659014926"
	Nov 23 10:58:21 embed-certs-969029 kubelet[1470]: I1123 10:58:21.423718    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-969029" podStartSLOduration=1.423712672 podStartE2EDuration="1.423712672s" podCreationTimestamp="2025-11-23 10:58:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:21.423242825 +0000 UTC m=+1.658681789" watchObservedRunningTime="2025-11-23 10:58:21.423712672 +0000 UTC m=+1.659151530"
	Nov 23 10:58:23 embed-certs-969029 kubelet[1470]: I1123 10:58:23.678668    1470 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 10:58:23 embed-certs-969029 kubelet[1470]: I1123 10:58:23.679698    1470 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750089    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/002c9f0c-528d-4eed-b241-435de51af248-xtables-lock\") pod \"kube-proxy-dsz2q\" (UID: \"002c9f0c-528d-4eed-b241-435de51af248\") " pod="kube-system/kube-proxy-dsz2q"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750144    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/da716ec2-e4e8-4663-a452-0c9925b721e1-cni-cfg\") pod \"kindnet-969gr\" (UID: \"da716ec2-e4e8-4663-a452-0c9925b721e1\") " pod="kube-system/kindnet-969gr"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750169    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da716ec2-e4e8-4663-a452-0c9925b721e1-xtables-lock\") pod \"kindnet-969gr\" (UID: \"da716ec2-e4e8-4663-a452-0c9925b721e1\") " pod="kube-system/kindnet-969gr"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750189    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmbsj\" (UniqueName: \"kubernetes.io/projected/da716ec2-e4e8-4663-a452-0c9925b721e1-kube-api-access-qmbsj\") pod \"kindnet-969gr\" (UID: \"da716ec2-e4e8-4663-a452-0c9925b721e1\") " pod="kube-system/kindnet-969gr"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750236    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da716ec2-e4e8-4663-a452-0c9925b721e1-lib-modules\") pod \"kindnet-969gr\" (UID: \"da716ec2-e4e8-4663-a452-0c9925b721e1\") " pod="kube-system/kindnet-969gr"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750268    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/002c9f0c-528d-4eed-b241-435de51af248-kube-proxy\") pod \"kube-proxy-dsz2q\" (UID: \"002c9f0c-528d-4eed-b241-435de51af248\") " pod="kube-system/kube-proxy-dsz2q"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750288    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/002c9f0c-528d-4eed-b241-435de51af248-lib-modules\") pod \"kube-proxy-dsz2q\" (UID: \"002c9f0c-528d-4eed-b241-435de51af248\") " pod="kube-system/kube-proxy-dsz2q"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.750311    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4djn5\" (UniqueName: \"kubernetes.io/projected/002c9f0c-528d-4eed-b241-435de51af248-kube-api-access-4djn5\") pod \"kube-proxy-dsz2q\" (UID: \"002c9f0c-528d-4eed-b241-435de51af248\") " pod="kube-system/kube-proxy-dsz2q"
	Nov 23 10:58:24 embed-certs-969029 kubelet[1470]: I1123 10:58:24.939373    1470 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 10:58:26 embed-certs-969029 kubelet[1470]: I1123 10:58:26.359230    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-969gr" podStartSLOduration=2.359170578 podStartE2EDuration="2.359170578s" podCreationTimestamp="2025-11-23 10:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:26.332544418 +0000 UTC m=+6.567983276" watchObservedRunningTime="2025-11-23 10:58:26.359170578 +0000 UTC m=+6.594609444"
	Nov 23 10:58:30 embed-certs-969029 kubelet[1470]: I1123 10:58:30.188303    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dsz2q" podStartSLOduration=6.188284239 podStartE2EDuration="6.188284239s" podCreationTimestamp="2025-11-23 10:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:58:26.365445929 +0000 UTC m=+6.600884795" watchObservedRunningTime="2025-11-23 10:58:30.188284239 +0000 UTC m=+10.423723105"
	Nov 23 10:59:06 embed-certs-969029 kubelet[1470]: I1123 10:59:06.237893    1470 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 10:59:06 embed-certs-969029 kubelet[1470]: I1123 10:59:06.456332    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/dec18915-2717-4390-96b8-95f56ec7405f-tmp\") pod \"storage-provisioner\" (UID: \"dec18915-2717-4390-96b8-95f56ec7405f\") " pod="kube-system/storage-provisioner"
	Nov 23 10:59:06 embed-certs-969029 kubelet[1470]: I1123 10:59:06.456389    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4s9m\" (UniqueName: \"kubernetes.io/projected/dec18915-2717-4390-96b8-95f56ec7405f-kube-api-access-f4s9m\") pod \"storage-provisioner\" (UID: \"dec18915-2717-4390-96b8-95f56ec7405f\") " pod="kube-system/storage-provisioner"
	Nov 23 10:59:06 embed-certs-969029 kubelet[1470]: I1123 10:59:06.456414    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81912f6a-cbf4-4bd7-84ac-ca2ffc36269c-config-volume\") pod \"coredns-66bc5c9577-pgvtk\" (UID: \"81912f6a-cbf4-4bd7-84ac-ca2ffc36269c\") " pod="kube-system/coredns-66bc5c9577-pgvtk"
	Nov 23 10:59:06 embed-certs-969029 kubelet[1470]: I1123 10:59:06.456432    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bkzr\" (UniqueName: \"kubernetes.io/projected/81912f6a-cbf4-4bd7-84ac-ca2ffc36269c-kube-api-access-5bkzr\") pod \"coredns-66bc5c9577-pgvtk\" (UID: \"81912f6a-cbf4-4bd7-84ac-ca2ffc36269c\") " pod="kube-system/coredns-66bc5c9577-pgvtk"
	Nov 23 10:59:07 embed-certs-969029 kubelet[1470]: I1123 10:59:07.457438    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-pgvtk" podStartSLOduration=43.457421411 podStartE2EDuration="43.457421411s" podCreationTimestamp="2025-11-23 10:58:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:59:07.421228503 +0000 UTC m=+47.656667361" watchObservedRunningTime="2025-11-23 10:59:07.457421411 +0000 UTC m=+47.692860269"
	Nov 23 10:59:09 embed-certs-969029 kubelet[1470]: I1123 10:59:09.904707    1470 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=43.904686817 podStartE2EDuration="43.904686817s" podCreationTimestamp="2025-11-23 10:58:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 10:59:07.489088408 +0000 UTC m=+47.724527274" watchObservedRunningTime="2025-11-23 10:59:09.904686817 +0000 UTC m=+50.140125675"
	Nov 23 10:59:09 embed-certs-969029 kubelet[1470]: I1123 10:59:09.984948    1470 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz4k9\" (UniqueName: \"kubernetes.io/projected/976d8660-27e9-4d64-bcea-5f2857bfbd4f-kube-api-access-zz4k9\") pod \"busybox\" (UID: \"976d8660-27e9-4d64-bcea-5f2857bfbd4f\") " pod="default/busybox"
	
	
	==> storage-provisioner [2f5d06eedabf27faeb1d3c7a374ee174b2d38eb17a6ddaa747e6c3ba71437052] <==
	I1123 10:59:06.995610       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 10:59:07.001019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:07.010669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:59:07.010828       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 10:59:07.011004       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-969029_0feb64e6-d700-447f-8805-add762a268fd!
	I1123 10:59:07.012033       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c09876f5-d9bf-4563-886d-c5272f70f415", APIVersion:"v1", ResourceVersion:"461", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-969029_0feb64e6-d700-447f-8805-add762a268fd became leader
	W1123 10:59:07.017753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:07.023684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 10:59:07.112067       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-969029_0feb64e6-d700-447f-8805-add762a268fd!
	W1123 10:59:09.027407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:09.035065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:11.043788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:11.083856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:13.087753       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:13.094431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:15.098372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:15.107423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:17.112088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:17.122237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:19.126037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:19.131073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:21.134367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:21.144584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:23.151854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 10:59:23.157618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-969029 -n embed-certs-969029
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-969029 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.62s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.71s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-071466 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f8b4a07b-84e0-4042-b688-0f75fde332b2] Pending
helpers_test.go:352: "busybox" [f8b4a07b-84e0-4042-b688-0f75fde332b2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f8b4a07b-84e0-4042-b688-0f75fde332b2] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004073576s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-071466 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
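For context on the assertion above: 'ulimit -n' reports the soft limit on open file descriptors as seen inside the busybox pod, and the test expects 1048576. A minimal way to compare the pod's view with the limit granted to containerd on the minikube node is sketched below; it assumes the profile and node container names shown in this report and that containerd runs as a systemd service inside the node container (this is a diagnostic sketch, not part of the test harness):

    # what the pod sees (same command the test runs)
    kubectl --context default-k8s-diff-port-071466 exec busybox -- /bin/sh -c "ulimit -n"
    # what the containerd service on the minikube node container is allowed
    docker exec default-k8s-diff-port-071466 systemctl show containerd -p LimitNOFILE

If the two values differ, the limit is being lowered between the runtime and the pod; if both report 1024, the node-level configuration is the more likely cause.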
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-071466
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-071466:

-- stdout --
	[
	    {
	        "Id": "c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a",
	        "Created": "2025-11-23T11:00:19.52400508Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1809898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:00:19.581807861Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a/hostname",
	        "HostsPath": "/var/lib/docker/containers/c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a/hosts",
	        "LogPath": "/var/lib/docker/containers/c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a/c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a-json.log",
	        "Name": "/default-k8s-diff-port-071466",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-071466:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-071466",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a",
	                "LowerDir": "/var/lib/docker/overlay2/0bd94846ae3639b9505a2696918cdb1c3c3d22c2aac69987a1be43a2c988740a-init/diff:/var/lib/docker/overlay2/fe0bef51c968206096993e9a75db2143cd9cd74d56696a257291ce63f851a2d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0bd94846ae3639b9505a2696918cdb1c3c3d22c2aac69987a1be43a2c988740a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0bd94846ae3639b9505a2696918cdb1c3c3d22c2aac69987a1be43a2c988740a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0bd94846ae3639b9505a2696918cdb1c3c3d22c2aac69987a1be43a2c988740a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-071466",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-071466/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-071466",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-071466",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-071466",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "73ffca0e5ed3db4979fd483935711dc0dc0f7eb3edd65d044115e687b59538d1",
	            "SandboxKey": "/var/run/docker/netns/73ffca0e5ed3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35284"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35285"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35288"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35286"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35287"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-071466": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:50:42:3c:8c:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c99e598dbc50acc2b850b8fed135c2980d7494ab20f2df75ff9827da7f784687",
	                    "EndpointID": "e8e283486275522c1a16758d954b937539b4ddd8e6f1a5003fa968d8d5d87bd6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-071466",
	                        "c233f0259bfd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
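Note: the forwarded host ports recorded in the "Ports" section of the inspect output above can be read back individually with the same Go-template form the harness itself uses later in these logs. For example (illustration only, not part of the captured run; the port number is the one recorded above and will differ between runs):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-071466
	# prints 35287 for the container state captured above, i.e. the host side of the apiserver port 8444/tcp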
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466
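Note: the {{.Host}} template in the status command above reports only the machine state. A fuller one-line summary can be requested from the same binary with additional fields (field names as emitted by `minikube status -o json`; shown here as an illustration assuming the v1.37.0 build used in this run, not as part of the captured output):

	out/minikube-linux-arm64 status --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}' -p default-k8s-diff-port-071466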
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-071466 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-071466 logs -n 25: (1.991546236s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p no-preload-055571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 10:58 UTC │ 23 Nov 25 10:58 UTC │
	│ stop    │ -p no-preload-055571 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 10:58 UTC │ 23 Nov 25 10:59 UTC │
	│ addons  │ enable dashboard -p no-preload-055571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ start   │ -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-969029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ stop    │ -p embed-certs-969029 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ addons  │ enable dashboard -p embed-certs-969029 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ start   │ -p embed-certs-969029 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 11:00 UTC │
	│ image   │ no-preload-055571 image list --format=json                                                                                                                                                                                                          │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ pause   │ -p no-preload-055571 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ unpause │ -p no-preload-055571 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ delete  │ -p no-preload-055571                                                                                                                                                                                                                                │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ delete  │ -p no-preload-055571                                                                                                                                                                                                                                │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ delete  │ -p disable-driver-mounts-436374                                                                                                                                                                                                                     │ disable-driver-mounts-436374 │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ start   │ -p default-k8s-diff-port-071466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-071466 │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:01 UTC │
	│ image   │ embed-certs-969029 image list --format=json                                                                                                                                                                                                         │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ pause   │ -p embed-certs-969029 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ unpause │ -p embed-certs-969029 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ delete  │ -p embed-certs-969029                                                                                                                                                                                                                               │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ delete  │ -p embed-certs-969029                                                                                                                                                                                                                               │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ start   │ -p newest-cni-268828 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-268828            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:01 UTC │
	│ addons  │ enable metrics-server -p newest-cni-268828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-268828            │ jenkins │ v1.37.0 │ 23 Nov 25 11:01 UTC │ 23 Nov 25 11:01 UTC │
	│ stop    │ -p newest-cni-268828 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-268828            │ jenkins │ v1.37.0 │ 23 Nov 25 11:01 UTC │ 23 Nov 25 11:01 UTC │
	│ addons  │ enable dashboard -p newest-cni-268828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-268828            │ jenkins │ v1.37.0 │ 23 Nov 25 11:01 UTC │ 23 Nov 25 11:01 UTC │
	│ start   │ -p newest-cni-268828 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-268828            │ jenkins │ v1.37.0 │ 23 Nov 25 11:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:01:33
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:01:33.311891 1816435 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:01:33.312062 1816435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:01:33.312093 1816435 out.go:374] Setting ErrFile to fd 2...
	I1123 11:01:33.312112 1816435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:01:33.312406 1816435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 11:01:33.312929 1816435 out.go:368] Setting JSON to false
	I1123 11:01:33.313971 1816435 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":42239,"bootTime":1763853455,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 11:01:33.314068 1816435 start.go:143] virtualization:  
	I1123 11:01:33.317425 1816435 out.go:179] * [newest-cni-268828] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:01:33.321143 1816435 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:01:33.321286 1816435 notify.go:221] Checking for updates...
	I1123 11:01:33.326775 1816435 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:01:33.329643 1816435 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 11:01:33.332481 1816435 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 11:01:33.335518 1816435 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:01:33.338438 1816435 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1123 11:01:29.443397 1809512 node_ready.go:57] node "default-k8s-diff-port-071466" has "Ready":"False" status (will retry)
	W1123 11:01:31.942936 1809512 node_ready.go:57] node "default-k8s-diff-port-071466" has "Ready":"False" status (will retry)
	I1123 11:01:33.341829 1816435 config.go:182] Loaded profile config "newest-cni-268828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 11:01:33.342487 1816435 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:01:33.377137 1816435 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:01:33.377264 1816435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:01:33.434711 1816435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:01:33.424867464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:01:33.434866 1816435 docker.go:319] overlay module found
	I1123 11:01:33.438035 1816435 out.go:179] * Using the docker driver based on existing profile
	I1123 11:01:33.440866 1816435 start.go:309] selected driver: docker
	I1123 11:01:33.440899 1816435 start.go:927] validating driver "docker" against &{Name:newest-cni-268828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268828 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:01:33.441143 1816435 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:01:33.441825 1816435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:01:33.502236 1816435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:01:33.492896941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:01:33.502580 1816435 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 11:01:33.502614 1816435 cni.go:84] Creating CNI manager for ""
	I1123 11:01:33.502678 1816435 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 11:01:33.502721 1816435 start.go:353] cluster config:
	{Name:newest-cni-268828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:01:33.505842 1816435 out.go:179] * Starting "newest-cni-268828" primary control-plane node in "newest-cni-268828" cluster
	I1123 11:01:33.508605 1816435 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 11:01:33.511498 1816435 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:01:33.514280 1816435 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:01:33.514245 1816435 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 11:01:33.514344 1816435 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 11:01:33.514355 1816435 cache.go:65] Caching tarball of preloaded images
	I1123 11:01:33.514434 1816435 preload.go:238] Found /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 11:01:33.514445 1816435 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 11:01:33.514563 1816435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/config.json ...
	I1123 11:01:33.536258 1816435 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:01:33.536283 1816435 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:01:33.536304 1816435 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:01:33.536335 1816435 start.go:360] acquireMachinesLock for newest-cni-268828: {Name:mk6fb61bd7d279f886e7ed4e66b2ff775ec57a78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:01:33.536408 1816435 start.go:364] duration metric: took 45.045µs to acquireMachinesLock for "newest-cni-268828"
	I1123 11:01:33.536432 1816435 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:01:33.536444 1816435 fix.go:54] fixHost starting: 
	I1123 11:01:33.536715 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:33.554152 1816435 fix.go:112] recreateIfNeeded on newest-cni-268828: state=Stopped err=<nil>
	W1123 11:01:33.554181 1816435 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 11:01:33.943071 1809512 node_ready.go:57] node "default-k8s-diff-port-071466" has "Ready":"False" status (will retry)
	I1123 11:01:34.442855 1809512 node_ready.go:49] node "default-k8s-diff-port-071466" is "Ready"
	I1123 11:01:34.442889 1809512 node_ready.go:38] duration metric: took 40.002980398s for node "default-k8s-diff-port-071466" to be "Ready" ...
	I1123 11:01:34.442904 1809512 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:01:34.442975 1809512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:01:34.457080 1809512 api_server.go:72] duration metric: took 41.836360272s to wait for apiserver process to appear ...
	I1123 11:01:34.457105 1809512 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:01:34.457124 1809512 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 11:01:34.477066 1809512 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 11:01:34.478150 1809512 api_server.go:141] control plane version: v1.34.1
	I1123 11:01:34.478175 1809512 api_server.go:131] duration metric: took 21.064441ms to wait for apiserver health ...
	I1123 11:01:34.478184 1809512 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:01:34.481548 1809512 system_pods.go:59] 8 kube-system pods found
	I1123 11:01:34.481642 1809512 system_pods.go:61] "coredns-66bc5c9577-k6bmz" [44dabc1e-0b98-4250-861a-5992ede34070] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:01:34.481665 1809512 system_pods.go:61] "etcd-default-k8s-diff-port-071466" [c7f6bb44-f6a0-400b-8ef2-57e9bfa53d69] Running
	I1123 11:01:34.481701 1809512 system_pods.go:61] "kindnet-2wbs5" [5a1f31cd-7028-4474-ae74-b50d5307009a] Running
	I1123 11:01:34.481727 1809512 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071466" [718a0115-63c0-4917-a905-077d8428220c] Running
	I1123 11:01:34.481745 1809512 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071466" [aa29c3d0-a2b7-4c4c-a342-d4162fc5ac23] Running
	I1123 11:01:34.481780 1809512 system_pods.go:61] "kube-proxy-5zfbc" [ce0d571b-d10b-446a-8824-44e1566eb31f] Running
	I1123 11:01:34.481798 1809512 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071466" [d08fc5a3-8d83-4529-89c5-241765de3656] Running
	I1123 11:01:34.481829 1809512 system_pods.go:61] "storage-provisioner" [152263f3-c362-4790-a756-3d028b31e04a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:01:34.481859 1809512 system_pods.go:74] duration metric: took 3.668428ms to wait for pod list to return data ...
	I1123 11:01:34.481882 1809512 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:01:34.487142 1809512 default_sa.go:45] found service account: "default"
	I1123 11:01:34.487223 1809512 default_sa.go:55] duration metric: took 5.321596ms for default service account to be created ...
	I1123 11:01:34.487248 1809512 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:01:34.492109 1809512 system_pods.go:86] 8 kube-system pods found
	I1123 11:01:34.492143 1809512 system_pods.go:89] "coredns-66bc5c9577-k6bmz" [44dabc1e-0b98-4250-861a-5992ede34070] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:01:34.492150 1809512 system_pods.go:89] "etcd-default-k8s-diff-port-071466" [c7f6bb44-f6a0-400b-8ef2-57e9bfa53d69] Running
	I1123 11:01:34.492157 1809512 system_pods.go:89] "kindnet-2wbs5" [5a1f31cd-7028-4474-ae74-b50d5307009a] Running
	I1123 11:01:34.492161 1809512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071466" [718a0115-63c0-4917-a905-077d8428220c] Running
	I1123 11:01:34.492166 1809512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071466" [aa29c3d0-a2b7-4c4c-a342-d4162fc5ac23] Running
	I1123 11:01:34.492170 1809512 system_pods.go:89] "kube-proxy-5zfbc" [ce0d571b-d10b-446a-8824-44e1566eb31f] Running
	I1123 11:01:34.492175 1809512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071466" [d08fc5a3-8d83-4529-89c5-241765de3656] Running
	I1123 11:01:34.492181 1809512 system_pods.go:89] "storage-provisioner" [152263f3-c362-4790-a756-3d028b31e04a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:01:34.492203 1809512 retry.go:31] will retry after 238.611806ms: missing components: kube-dns
	I1123 11:01:34.735870 1809512 system_pods.go:86] 8 kube-system pods found
	I1123 11:01:34.735953 1809512 system_pods.go:89] "coredns-66bc5c9577-k6bmz" [44dabc1e-0b98-4250-861a-5992ede34070] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:01:34.735977 1809512 system_pods.go:89] "etcd-default-k8s-diff-port-071466" [c7f6bb44-f6a0-400b-8ef2-57e9bfa53d69] Running
	I1123 11:01:34.735999 1809512 system_pods.go:89] "kindnet-2wbs5" [5a1f31cd-7028-4474-ae74-b50d5307009a] Running
	I1123 11:01:34.736018 1809512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071466" [718a0115-63c0-4917-a905-077d8428220c] Running
	I1123 11:01:34.736046 1809512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071466" [aa29c3d0-a2b7-4c4c-a342-d4162fc5ac23] Running
	I1123 11:01:34.736066 1809512 system_pods.go:89] "kube-proxy-5zfbc" [ce0d571b-d10b-446a-8824-44e1566eb31f] Running
	I1123 11:01:34.736086 1809512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071466" [d08fc5a3-8d83-4529-89c5-241765de3656] Running
	I1123 11:01:34.736105 1809512 system_pods.go:89] "storage-provisioner" [152263f3-c362-4790-a756-3d028b31e04a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:01:34.736153 1809512 retry.go:31] will retry after 240.954117ms: missing components: kube-dns
	I1123 11:01:34.982245 1809512 system_pods.go:86] 8 kube-system pods found
	I1123 11:01:34.982280 1809512 system_pods.go:89] "coredns-66bc5c9577-k6bmz" [44dabc1e-0b98-4250-861a-5992ede34070] Running
	I1123 11:01:34.982288 1809512 system_pods.go:89] "etcd-default-k8s-diff-port-071466" [c7f6bb44-f6a0-400b-8ef2-57e9bfa53d69] Running
	I1123 11:01:34.982295 1809512 system_pods.go:89] "kindnet-2wbs5" [5a1f31cd-7028-4474-ae74-b50d5307009a] Running
	I1123 11:01:34.982299 1809512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071466" [718a0115-63c0-4917-a905-077d8428220c] Running
	I1123 11:01:34.982304 1809512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071466" [aa29c3d0-a2b7-4c4c-a342-d4162fc5ac23] Running
	I1123 11:01:34.982308 1809512 system_pods.go:89] "kube-proxy-5zfbc" [ce0d571b-d10b-446a-8824-44e1566eb31f] Running
	I1123 11:01:34.982312 1809512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071466" [d08fc5a3-8d83-4529-89c5-241765de3656] Running
	I1123 11:01:34.982316 1809512 system_pods.go:89] "storage-provisioner" [152263f3-c362-4790-a756-3d028b31e04a] Running
	I1123 11:01:34.982324 1809512 system_pods.go:126] duration metric: took 495.058934ms to wait for k8s-apps to be running ...
	I1123 11:01:34.982335 1809512 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:01:34.982398 1809512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:01:34.997191 1809512 system_svc.go:56] duration metric: took 14.846942ms WaitForService to wait for kubelet
	I1123 11:01:34.997225 1809512 kubeadm.go:587] duration metric: took 42.376509466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:01:34.997256 1809512 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:01:35.000490 1809512 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:01:35.000532 1809512 node_conditions.go:123] node cpu capacity is 2
	I1123 11:01:35.000546 1809512 node_conditions.go:105] duration metric: took 3.284815ms to run NodePressure ...
	I1123 11:01:35.000559 1809512 start.go:242] waiting for startup goroutines ...
	I1123 11:01:35.000567 1809512 start.go:247] waiting for cluster config update ...
	I1123 11:01:35.000580 1809512 start.go:256] writing updated cluster config ...
	I1123 11:01:35.000940 1809512 ssh_runner.go:195] Run: rm -f paused
	I1123 11:01:35.006421 1809512 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:01:35.011726 1809512 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k6bmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.017520 1809512 pod_ready.go:94] pod "coredns-66bc5c9577-k6bmz" is "Ready"
	I1123 11:01:35.017550 1809512 pod_ready.go:86] duration metric: took 5.791394ms for pod "coredns-66bc5c9577-k6bmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.030638 1809512 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.036565 1809512 pod_ready.go:94] pod "etcd-default-k8s-diff-port-071466" is "Ready"
	I1123 11:01:35.036592 1809512 pod_ready.go:86] duration metric: took 5.927431ms for pod "etcd-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.039543 1809512 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.044766 1809512 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-071466" is "Ready"
	I1123 11:01:35.044841 1809512 pod_ready.go:86] duration metric: took 5.225484ms for pod "kube-apiserver-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.047859 1809512 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.412634 1809512 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-071466" is "Ready"
	I1123 11:01:35.412709 1809512 pod_ready.go:86] duration metric: took 364.784645ms for pod "kube-controller-manager-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.616949 1809512 pod_ready.go:83] waiting for pod "kube-proxy-5zfbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:36.012895 1809512 pod_ready.go:94] pod "kube-proxy-5zfbc" is "Ready"
	I1123 11:01:36.012925 1809512 pod_ready.go:86] duration metric: took 395.901246ms for pod "kube-proxy-5zfbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:36.210969 1809512 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:36.615932 1809512 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-071466" is "Ready"
	I1123 11:01:36.616006 1809512 pod_ready.go:86] duration metric: took 405.011338ms for pod "kube-scheduler-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:36.616033 1809512 pod_ready.go:40] duration metric: took 1.609556549s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:01:36.671404 1809512 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:01:36.674879 1809512 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-071466" cluster and "default" namespace by default
	I1123 11:01:33.557482 1816435 out.go:252] * Restarting existing docker container for "newest-cni-268828" ...
	I1123 11:01:33.557570 1816435 cli_runner.go:164] Run: docker start newest-cni-268828
	I1123 11:01:33.841633 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:33.866430 1816435 kic.go:430] container "newest-cni-268828" state is running.
	I1123 11:01:33.866795 1816435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-268828
	I1123 11:01:33.890875 1816435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/config.json ...
	I1123 11:01:33.891106 1816435 machine.go:94] provisionDockerMachine start ...
	I1123 11:01:33.891166 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:33.911488 1816435 main.go:143] libmachine: Using SSH client type: native
	I1123 11:01:33.911813 1816435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35294 <nil> <nil>}
	I1123 11:01:33.911822 1816435 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:01:33.912596 1816435 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:01:37.075016 1816435 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-268828
	
	I1123 11:01:37.075045 1816435 ubuntu.go:182] provisioning hostname "newest-cni-268828"
	I1123 11:01:37.075108 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.094156 1816435 main.go:143] libmachine: Using SSH client type: native
	I1123 11:01:37.094461 1816435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35294 <nil> <nil>}
	I1123 11:01:37.094475 1816435 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-268828 && echo "newest-cni-268828" | sudo tee /etc/hostname
	I1123 11:01:37.261972 1816435 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-268828
	
	I1123 11:01:37.262053 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.288040 1816435 main.go:143] libmachine: Using SSH client type: native
	I1123 11:01:37.288392 1816435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35294 <nil> <nil>}
	I1123 11:01:37.288415 1816435 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-268828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-268828/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-268828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:01:37.443255 1816435 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:01:37.443281 1816435 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-1582671/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-1582671/.minikube}
	I1123 11:01:37.443304 1816435 ubuntu.go:190] setting up certificates
	I1123 11:01:37.443319 1816435 provision.go:84] configureAuth start
	I1123 11:01:37.443379 1816435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-268828
	I1123 11:01:37.461370 1816435 provision.go:143] copyHostCerts
	I1123 11:01:37.461442 1816435 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem, removing ...
	I1123 11:01:37.461461 1816435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem
	I1123 11:01:37.461543 1816435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem (1078 bytes)
	I1123 11:01:37.461642 1816435 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem, removing ...
	I1123 11:01:37.461651 1816435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem
	I1123 11:01:37.461678 1816435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem (1123 bytes)
	I1123 11:01:37.461739 1816435 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem, removing ...
	I1123 11:01:37.461747 1816435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem
	I1123 11:01:37.461772 1816435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem (1675 bytes)
	I1123 11:01:37.461821 1816435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem org=jenkins.newest-cni-268828 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-268828]
	I1123 11:01:37.526677 1816435 provision.go:177] copyRemoteCerts
	I1123 11:01:37.526741 1816435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:01:37.526825 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.543571 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:37.650790 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 11:01:37.669226 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 11:01:37.686635 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:01:37.705140 1816435 provision.go:87] duration metric: took 261.778989ms to configureAuth
	I1123 11:01:37.705217 1816435 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:01:37.705476 1816435 config.go:182] Loaded profile config "newest-cni-268828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 11:01:37.705506 1816435 machine.go:97] duration metric: took 3.814390678s to provisionDockerMachine
	I1123 11:01:37.705536 1816435 start.go:293] postStartSetup for "newest-cni-268828" (driver="docker")
	I1123 11:01:37.705559 1816435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:01:37.705635 1816435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:01:37.705706 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.723642 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:37.826998 1816435 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:01:37.830376 1816435 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:01:37.830408 1816435 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:01:37.830425 1816435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/addons for local assets ...
	I1123 11:01:37.830480 1816435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/files for local assets ...
	I1123 11:01:37.830557 1816435 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem -> 15845322.pem in /etc/ssl/certs
	I1123 11:01:37.830661 1816435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:01:37.838288 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 11:01:37.857108 1816435 start.go:296] duration metric: took 151.542897ms for postStartSetup
	I1123 11:01:37.857230 1816435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:01:37.857304 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.875768 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:37.975988 1816435 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:01:37.980793 1816435 fix.go:56] duration metric: took 4.444342897s for fixHost
	I1123 11:01:37.980817 1816435 start.go:83] releasing machines lock for "newest-cni-268828", held for 4.444396344s
	I1123 11:01:37.980888 1816435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-268828
	I1123 11:01:37.999505 1816435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:01:37.999673 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.999875 1816435 ssh_runner.go:195] Run: cat /version.json
	I1123 11:01:37.999914 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:38.039003 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:38.046071 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:38.146855 1816435 ssh_runner.go:195] Run: systemctl --version
	I1123 11:01:38.243039 1816435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:01:38.247701 1816435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:01:38.247778 1816435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:01:38.255492 1816435 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:01:38.255527 1816435 start.go:496] detecting cgroup driver to use...
	I1123 11:01:38.255558 1816435 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:01:38.255607 1816435 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 11:01:38.273649 1816435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 11:01:38.287140 1816435 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:01:38.287252 1816435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:01:38.311748 1816435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:01:38.331314 1816435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:01:38.458173 1816435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:01:38.573731 1816435 docker.go:234] disabling docker service ...
	I1123 11:01:38.573792 1816435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:01:38.588674 1816435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:01:38.601633 1816435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:01:38.726157 1816435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:01:38.850384 1816435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:01:38.863370 1816435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:01:38.879007 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 11:01:38.891891 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 11:01:38.901962 1816435 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 11:01:38.902078 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 11:01:38.910984 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 11:01:38.919868 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 11:01:38.929246 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 11:01:38.938639 1816435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:01:38.947089 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 11:01:38.956207 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 11:01:38.965309 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 11:01:38.974859 1816435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:01:38.982715 1816435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:01:38.989922 1816435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:01:39.109771 1816435 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 11:01:39.247285 1816435 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 11:01:39.247362 1816435 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 11:01:39.251309 1816435 start.go:564] Will wait 60s for crictl version
	I1123 11:01:39.251420 1816435 ssh_runner.go:195] Run: which crictl
	I1123 11:01:39.254893 1816435 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:01:39.286098 1816435 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 11:01:39.286219 1816435 ssh_runner.go:195] Run: containerd --version
	I1123 11:01:39.307296 1816435 ssh_runner.go:195] Run: containerd --version
	I1123 11:01:39.331081 1816435 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 11:01:39.334184 1816435 cli_runner.go:164] Run: docker network inspect newest-cni-268828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:01:39.350249 1816435 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:01:39.354062 1816435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
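The one-liner above is the usual idempotent add-or-replace for a pinned hosts entry: drop any existing line whose name field matches, append the fresh IP-to-name mapping to a temp file, then copy it back with sudo (a plain shell redirect onto /etc/hosts would not run with root privileges). A generic sketch of the same pattern, with a hypothetical hostname and IP:

    # Hypothetical host entry, same add-or-replace pattern as the command above
    { grep -v $'\tmy.internal.host$' /etc/hosts; echo $'192.168.49.1\tmy.internal.host'; } > /tmp/hosts.$$
    sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$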
	I1123 11:01:39.367134 1816435 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 11:01:39.370209 1816435 kubeadm.go:884] updating cluster {Name:newest-cni-268828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:01:39.370372 1816435 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 11:01:39.370462 1816435 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:01:39.396326 1816435 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 11:01:39.396355 1816435 containerd.go:534] Images already preloaded, skipping extraction
	I1123 11:01:39.396418 1816435 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:01:39.423162 1816435 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 11:01:39.423214 1816435 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:01:39.423223 1816435 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 11:01:39.423373 1816435 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-268828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
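The unit fragment printed above is the kubelet drop-in; per the scp lines a little further down, it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf next to /lib/systemd/system/kubelet.service. To see what systemd actually merged on the node, something like the following would work:

    # Base unit plus every drop-in systemd merges for kubelet
    systemctl cat kubelet
    # Just the effective ExecStart line
    systemctl show kubelet -p ExecStart --no-pager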
	I1123 11:01:39.423470 1816435 ssh_runner.go:195] Run: sudo crictl info
	I1123 11:01:39.451640 1816435 cni.go:84] Creating CNI manager for ""
	I1123 11:01:39.451662 1816435 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 11:01:39.451682 1816435 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 11:01:39.451734 1816435 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-268828 NodeName:newest-cni-268828 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:01:39.451897 1816435 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-268828"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:01:39.451985 1816435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:01:39.459938 1816435 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:01:39.460008 1816435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:01:39.468043 1816435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 11:01:39.485655 1816435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:01:39.502843 1816435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
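The four documents dumped earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) are what lands in /var/tmp/minikube/kubeadm.yaml.new here. The log shows no validation step, but a manual sanity check of such a file could use kubeadm's own validator, assuming that subcommand is present in this kubeadm build:

    # Validate the rendered config against the kubeadm/kubelet/kube-proxy config APIs
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new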
	I1123 11:01:39.520387 1816435 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:01:39.524799 1816435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:01:39.536062 1816435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:01:39.712054 1816435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:01:39.735856 1816435 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828 for IP: 192.168.76.2
	I1123 11:01:39.735926 1816435 certs.go:195] generating shared ca certs ...
	I1123 11:01:39.735958 1816435 certs.go:227] acquiring lock for ca certs: {Name:mk3cca888d785818ac92c3c8d4e66a37bae0b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:01:39.736132 1816435 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key
	I1123 11:01:39.736219 1816435 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key
	I1123 11:01:39.736252 1816435 certs.go:257] generating profile certs ...
	I1123 11:01:39.736392 1816435 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/client.key
	I1123 11:01:39.736504 1816435 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/apiserver.key.ebdf4d7d
	I1123 11:01:39.736596 1816435 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/proxy-client.key
	I1123 11:01:39.736754 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem (1338 bytes)
	W1123 11:01:39.736826 1816435 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532_empty.pem, impossibly tiny 0 bytes
	I1123 11:01:39.736858 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:01:39.736915 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem (1078 bytes)
	I1123 11:01:39.736975 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:01:39.737039 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem (1675 bytes)
	I1123 11:01:39.737125 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 11:01:39.737864 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:01:39.774012 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:01:39.793521 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:01:39.813381 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:01:39.833798 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 11:01:39.855901 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:01:39.886615 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:01:39.922698 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 11:01:39.952248 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem --> /usr/share/ca-certificates/1584532.pem (1338 bytes)
	I1123 11:01:39.974127 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /usr/share/ca-certificates/15845322.pem (1708 bytes)
	I1123 11:01:39.993397 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:01:40.025743 1816435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:01:40.049830 1816435 ssh_runner.go:195] Run: openssl version
	I1123 11:01:40.058394 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15845322.pem && ln -fs /usr/share/ca-certificates/15845322.pem /etc/ssl/certs/15845322.pem"
	I1123 11:01:40.068314 1816435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15845322.pem
	I1123 11:01:40.073301 1816435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:17 /usr/share/ca-certificates/15845322.pem
	I1123 11:01:40.073402 1816435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15845322.pem
	I1123 11:01:40.117652 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15845322.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:01:40.126193 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:01:40.136591 1816435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:01:40.140664 1816435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:10 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:01:40.140771 1816435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:01:40.184569 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:01:40.193180 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584532.pem && ln -fs /usr/share/ca-certificates/1584532.pem /etc/ssl/certs/1584532.pem"
	I1123 11:01:40.201762 1816435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584532.pem
	I1123 11:01:40.205608 1816435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:17 /usr/share/ca-certificates/1584532.pem
	I1123 11:01:40.205666 1816435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584532.pem
	I1123 11:01:40.247143 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584532.pem /etc/ssl/certs/51391683.0"
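The <hash>.0 symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's c_rehash convention: the filename is the certificate's subject-name hash, which is what tools resolving trust via /etc/ssl/certs look up. A short illustration of where those names come from, using the same files as the log:

    # The symlink name is the subject hash of the certificate being trusted
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    # Once linked, path-based verification should succeed for certs issued by that CA
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem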
	I1123 11:01:40.255258 1816435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:01:40.259403 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:01:40.301498 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:01:40.344708 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:01:40.395988 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:01:40.464440 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:01:40.535122 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
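Each of the openssl runs above uses -checkend 86400, which exits 0 only if the certificate will still be valid 86400 seconds (24 hours) from now and exits non-zero otherwise. A standalone sketch of the same check:

    # -checkend N asks: will this cert still be valid N seconds from now?
    CRT=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    if openssl x509 -noout -in "$CRT" -checkend 86400; then
        echo "still valid 24h from now"
    else
        echo "expires within 24h (or already expired)"
    fi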
	I1123 11:01:40.592291 1816435 kubeadm.go:401] StartCluster: {Name:newest-cni-268828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:01:40.592447 1816435 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 11:01:40.592561 1816435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:01:40.628734 1816435 cri.go:89] found id: "9b9c5695adc9b34096e7b1153be0af91f64a7358197c4136e6d7cdcb8ee356e7"
	I1123 11:01:40.628806 1816435 cri.go:89] found id: "1e88c729158018c727fa4042547a53fdff2079f1c7ebbb4769d5f7469b29080a"
	I1123 11:01:40.628824 1816435 cri.go:89] found id: "5644a1b40970e031bf73f550647dabd25a735da17ddb0357563e863d6b483b68"
	I1123 11:01:40.628842 1816435 cri.go:89] found id: "e75e6d08ee4d03d06b9fd772ec785f63b5cac83213afac2a42cf9026fc4779a9"
	I1123 11:01:40.628885 1816435 cri.go:89] found id: "ed3c0f3efa1a7cbf7838a8dd0d6c68bea120cbc9adee3f0f4c366f6af82b718a"
	I1123 11:01:40.628906 1816435 cri.go:89] found id: "0834348ab3920104bae33d60d53b6fa926c7d9cdc7c9c7dc945181467dc0a7d1"
	I1123 11:01:40.628931 1816435 cri.go:89] found id: "096ff5530e7ceeaedaaf18ad87cf66bcaea7eb9f9bdbccd13818660c20070473"
	I1123 11:01:40.628948 1816435 cri.go:89] found id: ""
	I1123 11:01:40.629026 1816435 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 11:01:40.660773 1816435 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7","pid":825,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7/rootfs","created":"2025-11-23T11:01:40.441594659Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-268828_fc932fa3859a67e46de8ca75a8dabfc8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-268828","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fc932fa3859a67e46de8ca75a8dabfc8"},"owner":"root"},{"ociVersion":"1.2.1","id":"9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-268828_52f3dd0ac01467ef7acdb026602f01ce","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-268828","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"52f3dd0ac01467ef7acdb026602f01ce"},"owner":"root"},{"ociVersion":"1.2.1","id":"e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17","pid":897,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17/rootfs","created":"2025-11-23T11:01:40.5089831Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-268828_6b45b66204df9fbb9e1ee3e76da07f0b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-268828","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6b45b66204df9fbb9e1ee3e76da07f0b"},"owner":"root"}]
	I1123 11:01:40.660953 1816435 cri.go:126] list returned 3 containers
	I1123 11:01:40.660983 1816435 cri.go:129] container: {ID:9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7 Status:running}
	I1123 11:01:40.661036 1816435 cri.go:131] skipping 9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7 - not in ps
	I1123 11:01:40.661067 1816435 cri.go:129] container: {ID:9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b Status:stopped}
	I1123 11:01:40.661091 1816435 cri.go:131] skipping 9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b - not in ps
	I1123 11:01:40.661109 1816435 cri.go:129] container: {ID:e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17 Status:running}
	I1123 11:01:40.661127 1816435 cri.go:131] skipping e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17 - not in ps
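The JSON above is the raw output of the `runc ... list -f json` call; all three entries are pod sandboxes (annotation io.kubernetes.cri.container-type: sandbox) and none of their IDs appear in the crictl container list gathered just before, hence the three "not in ps" skips. A hand-run equivalent that summarizes the same data (jq assumed to be available on the node):

    # Summarize runc's view of the k8s.io namespace: id, status, sandbox name
    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | [.id[0:12], .status, (.annotations."io.kubernetes.cri.sandbox-name" // "-")] | @tsv'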
	I1123 11:01:40.661210 1816435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:01:40.672042 1816435 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:01:40.672101 1816435 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:01:40.672194 1816435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:01:40.693380 1816435 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:01:40.694093 1816435 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-268828" does not appear in /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 11:01:40.694427 1816435 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-1582671/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-268828" cluster setting kubeconfig missing "newest-cni-268828" context setting]
	I1123 11:01:40.694983 1816435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:01:40.697056 1816435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:01:40.719338 1816435 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 11:01:40.719421 1816435 kubeadm.go:602] duration metric: took 47.291204ms to restartPrimaryControlPlane
	I1123 11:01:40.719446 1816435 kubeadm.go:403] duration metric: took 127.165668ms to StartCluster
	I1123 11:01:40.719486 1816435 settings.go:142] acquiring lock: {Name:mk2ffa164862318fd53ac563f81d54c15c17157b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:01:40.719578 1816435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 11:01:40.720550 1816435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:01:40.720811 1816435 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 11:01:40.721206 1816435 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:01:40.721276 1816435 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-268828"
	I1123 11:01:40.721289 1816435 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-268828"
	W1123 11:01:40.721295 1816435 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:01:40.721315 1816435 host.go:66] Checking if "newest-cni-268828" exists ...
	I1123 11:01:40.721767 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:40.722233 1816435 config.go:182] Loaded profile config "newest-cni-268828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 11:01:40.722366 1816435 addons.go:70] Setting metrics-server=true in profile "newest-cni-268828"
	I1123 11:01:40.722398 1816435 addons.go:239] Setting addon metrics-server=true in "newest-cni-268828"
	W1123 11:01:40.722435 1816435 addons.go:248] addon metrics-server should already be in state true
	I1123 11:01:40.722470 1816435 host.go:66] Checking if "newest-cni-268828" exists ...
	I1123 11:01:40.722952 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:40.725612 1816435 addons.go:70] Setting dashboard=true in profile "newest-cni-268828"
	I1123 11:01:40.725643 1816435 addons.go:239] Setting addon dashboard=true in "newest-cni-268828"
	W1123 11:01:40.725650 1816435 addons.go:248] addon dashboard should already be in state true
	I1123 11:01:40.725673 1816435 host.go:66] Checking if "newest-cni-268828" exists ...
	I1123 11:01:40.726164 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:40.727994 1816435 out.go:179] * Verifying Kubernetes components...
	I1123 11:01:40.728133 1816435 addons.go:70] Setting default-storageclass=true in profile "newest-cni-268828"
	I1123 11:01:40.728146 1816435 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-268828"
	I1123 11:01:40.728402 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:40.733297 1816435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:01:40.790099 1816435 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:01:40.792939 1816435 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:01:40.792959 1816435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:01:40.793025 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:40.802452 1816435 addons.go:239] Setting addon default-storageclass=true in "newest-cni-268828"
	W1123 11:01:40.802472 1816435 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:01:40.802498 1816435 host.go:66] Checking if "newest-cni-268828" exists ...
	I1123 11:01:40.802902 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:40.813354 1816435 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:01:40.816631 1816435 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:01:40.825634 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:01:40.825667 1816435 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:01:40.825734 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:40.832978 1816435 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 11:01:40.836634 1816435 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 11:01:40.836661 1816435 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 11:01:40.836742 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:40.871009 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:40.883826 1816435 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:01:40.883848 1816435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:01:40.883910 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:40.888245 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:40.916380 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:40.921363 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:41.100867 1816435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:01:41.309093 1816435 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:01:41.309167 1816435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:01:41.434662 1816435 api_server.go:72] duration metric: took 713.775645ms to wait for apiserver process to appear ...
	I1123 11:01:41.434689 1816435 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:01:41.434708 1816435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:01:41.441519 1816435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:01:41.451465 1816435 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 11:01:41.451488 1816435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 11:01:41.497306 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:01:41.497330 1816435 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:01:41.537459 1816435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:01:41.582202 1816435 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 11:01:41.582226 1816435 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 11:01:41.638358 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:01:41.638384 1816435 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:01:41.734831 1816435 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 11:01:41.734856 1816435 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 11:01:41.765418 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:01:41.765444 1816435 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:01:41.780541 1816435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 11:01:41.902691 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:01:41.902720 1816435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:01:42.036738 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:01:42.036765 1816435 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:01:42.133168 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:01:42.133199 1816435 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:01:42.279558 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:01:42.279587 1816435 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:01:42.346197 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:01:42.346223 1816435 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:01:42.381649 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:01:42.381682 1816435 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:01:42.408745 1816435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
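After apply runs like the ones above, the rollout can be confirmed with the same bundled kubectl; the kubernetes-dashboard namespace below is assumed from the dashboard-ns.yaml manifest, while the other addons live in kube-system:

    # Confirm the addon workloads came up (dashboard namespace assumed from dashboard-ns.yaml)
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl -n kubernetes-dashboard get pods
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl -n kube-system get deploy metrics-server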
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	e189ce28e5135       1611cd07b61d5       7 seconds ago        Running             busybox                   0                   dffe386cbb7a1       busybox                                                default
	f22578041e154       ba04bb24b9575       12 seconds ago       Running             storage-provisioner       0                   a71725ffc52ad       storage-provisioner                                    kube-system
	3fd410f0e2a3e       138784d87c9c5       12 seconds ago       Running             coredns                   0                   d4bd47c2d2c6b       coredns-66bc5c9577-k6bmz                               kube-system
	87c2d96a65070       05baa95f5142d       53 seconds ago       Running             kube-proxy                0                   1bdacc4f63830       kube-proxy-5zfbc                                       kube-system
	358bcf6734bf7       b1a8c6f707935       54 seconds ago       Running             kindnet-cni               0                   fa9979b2ba5b0       kindnet-2wbs5                                          kube-system
	03dabe102ae94       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   0044637999614       kube-scheduler-default-k8s-diff-port-071466            kube-system
	623f8b82e9609       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   a463567617873       kube-controller-manager-default-k8s-diff-port-071466   kube-system
	6ecda63483579       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   a06ba656c7698       kube-apiserver-default-k8s-diff-port-071466            kube-system
	ac530913b339c       a1894772a478e       About a minute ago   Running             etcd                      0                   040a8f7203eb4       etcd-default-k8s-diff-port-071466                      kube-system
	
	
	==> containerd <==
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.747764190Z" level=info msg="Container f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.760548431Z" level=info msg="CreateContainer within sandbox \"d4bd47c2d2c6b401a91eb57dfa82239faf692590c589122dab43c2cc4193f0e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fd410f0e2a3e0c4840400042a8552e32e4eed36e78ef22f4b9b5d8220bd9592\""
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.762406091Z" level=info msg="StartContainer for \"3fd410f0e2a3e0c4840400042a8552e32e4eed36e78ef22f4b9b5d8220bd9592\""
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.764112230Z" level=info msg="connecting to shim 3fd410f0e2a3e0c4840400042a8552e32e4eed36e78ef22f4b9b5d8220bd9592" address="unix:///run/containerd/s/2f593413481c888c5a6eba1074941207bbe21b35f4e63729a4369031a10621b8" protocol=ttrpc version=3
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.767292079Z" level=info msg="CreateContainer within sandbox \"a71725ffc52ad275ea9bcbacbd9a99dbc7ab373a3bacda76b171d276a43a0860\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60\""
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.769770187Z" level=info msg="StartContainer for \"f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60\""
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.772755817Z" level=info msg="connecting to shim f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60" address="unix:///run/containerd/s/62d637d39bcef7694c61ae197324ac6e59ccd363192af10e3306d353c7e18dc0" protocol=ttrpc version=3
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.851390005Z" level=info msg="StartContainer for \"3fd410f0e2a3e0c4840400042a8552e32e4eed36e78ef22f4b9b5d8220bd9592\" returns successfully"
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.866281188Z" level=info msg="StartContainer for \"f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60\" returns successfully"
	Nov 23 11:01:37 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:37.191609721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f8b4a07b-84e0-4042-b688-0f75fde332b2,Namespace:default,Attempt:0,}"
	Nov 23 11:01:37 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:37.241347050Z" level=info msg="connecting to shim dffe386cbb7a181ef612be5f63319675f61a1fe631887b0f8aa0c7bad2626711" address="unix:///run/containerd/s/5c814b1bb8e757eb145ce183ae8d9e8cf8715370d46b3ba7f260cb722974d1bd" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 11:01:37 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:37.328809385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f8b4a07b-84e0-4042-b688-0f75fde332b2,Namespace:default,Attempt:0,} returns sandbox id \"dffe386cbb7a181ef612be5f63319675f61a1fe631887b0f8aa0c7bad2626711\""
	Nov 23 11:01:37 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:37.332190050Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.599477995Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.601617091Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.603987484Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.620106861Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.621026616Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.28879464s"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.621076182Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.634997658Z" level=info msg="CreateContainer within sandbox \"dffe386cbb7a181ef612be5f63319675f61a1fe631887b0f8aa0c7bad2626711\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.651742496Z" level=info msg="Container e189ce28e51356ce42963d1befa34acb8a4741d65b76fa98daeb63beb472e34c: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.664137184Z" level=info msg="CreateContainer within sandbox \"dffe386cbb7a181ef612be5f63319675f61a1fe631887b0f8aa0c7bad2626711\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"e189ce28e51356ce42963d1befa34acb8a4741d65b76fa98daeb63beb472e34c\""
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.668133817Z" level=info msg="StartContainer for \"e189ce28e51356ce42963d1befa34acb8a4741d65b76fa98daeb63beb472e34c\""
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.668961783Z" level=info msg="connecting to shim e189ce28e51356ce42963d1befa34acb8a4741d65b76fa98daeb63beb472e34c" address="unix:///run/containerd/s/5c814b1bb8e757eb145ce183ae8d9e8cf8715370d46b3ba7f260cb722974d1bd" protocol=ttrpc version=3
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.759888678Z" level=info msg="StartContainer for \"e189ce28e51356ce42963d1befa34acb8a4741d65b76fa98daeb63beb472e34c\" returns successfully"
	
	
	==> coredns [3fd410f0e2a3e0c4840400042a8552e32e4eed36e78ef22f4b9b5d8220bd9592] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55268 - 48427 "HINFO IN 4207838737370628221.1857008942795508872. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036125072s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-071466
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-071466
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=default-k8s-diff-port-071466
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_00_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:00:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-071466
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:01:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:01:34 +0000   Sun, 23 Nov 2025 11:00:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:01:34 +0000   Sun, 23 Nov 2025 11:00:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:01:34 +0000   Sun, 23 Nov 2025 11:00:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:01:34 +0000   Sun, 23 Nov 2025 11:01:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-071466
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                b03f0beb-f5c3-48a0-8808-a8238f689abb
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-k6bmz                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-default-k8s-diff-port-071466                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         59s
	  kube-system                 kindnet-2wbs5                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-default-k8s-diff-port-071466             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-071466    200m (10%)    0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-5zfbc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-default-k8s-diff-port-071466             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 53s                kube-proxy       
	  Normal   NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 70s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s (x8 over 70s)  kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s (x7 over 70s)  kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasSufficientPID
	  Normal   Starting                 70s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 60s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 60s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  60s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  59s                kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    59s                kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     59s                kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           56s                node-controller  Node default-k8s-diff-port-071466 event: Registered Node default-k8s-diff-port-071466 in Controller
	  Normal   NodeReady                13s                kubelet          Node default-k8s-diff-port-071466 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:50] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [ac530913b339c54f8e7f77be8b5c7acf50be231563d7a47e777d7d1ed95bc1cb] <==
	{"level":"warn","ts":"2025-11-23T11:00:41.812784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.826096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.846549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.871687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.885190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.902238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.925095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.938668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.957916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.981801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.998768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.018102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.050181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.070586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.096578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.128537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.148055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.165569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.189257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.231533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.253269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.266627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.290131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.315617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.433515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44902","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:01:48 up 11:44,  0 user,  load average: 5.21, 3.85, 3.16
	Linux default-k8s-diff-port-071466 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [358bcf6734bf791c2383468f755f233c9a10cd0f2f4cc61a94acc69e292996b3] <==
	I1123 11:00:53.927636       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:00:53.927940       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:00:53.928084       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:00:53.928153       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:00:53.928165       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:00:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:00:54.130548       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:00:54.130574       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:00:54.130583       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:00:54.130881       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:01:24.108952       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:01:24.131463       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:01:24.131567       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:01:24.131645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 11:01:25.430977       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:01:25.431021       1 metrics.go:72] Registering metrics
	I1123 11:01:25.431095       1 controller.go:711] "Syncing nftables rules"
	I1123 11:01:34.115272       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:01:34.115346       1 main.go:301] handling current node
	I1123 11:01:44.110792       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:01:44.110832       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ecda63483579ca7cd61353a106592764baea71552fb8844448159cb7e3ed5ee] <==
	I1123 11:00:43.672821       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:00:43.681376       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 11:00:43.697400       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:00:43.697686       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 11:00:43.718239       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:00:43.745703       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:00:43.886768       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:00:44.170518       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 11:00:44.179946       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 11:00:44.181995       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:00:45.375004       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:00:45.456768       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:00:45.577795       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 11:00:45.596580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 11:00:45.598178       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 11:00:45.605127       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:00:46.552254       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:00:47.081369       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:00:47.103356       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 11:00:47.130115       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 11:00:52.183903       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:00:52.490708       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:00:52.504597       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:00:52.545777       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 11:01:46.144033       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:48238: use of closed network connection
	
	
	==> kube-controller-manager [623f8b82e960994c44de86650859d7bdc1e47066943c6de9026db352a65cd857] <==
	I1123 11:00:51.616360       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 11:00:51.616387       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 11:00:51.616392       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 11:00:51.616397       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 11:00:51.619305       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:00:51.629626       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 11:00:51.636468       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 11:00:51.636798       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 11:00:51.637713       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:00:51.637877       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 11:00:51.637933       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 11:00:51.637959       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:00:51.637975       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 11:00:51.637992       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 11:00:51.638117       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 11:00:51.644483       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 11:00:51.644809       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:00:51.644873       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:00:51.644977       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 11:00:51.664796       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-071466" podCIDRs=["10.244.0.0/24"]
	I1123 11:00:51.695809       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:00:51.696042       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:00:51.696127       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:00:51.759405       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:01:36.596194       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [87c2d96a650706cc13ad43c58de9f2a9074d0c00d27858ed68f7da7671714699] <==
	I1123 11:00:54.073784       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:00:54.165725       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:00:54.266661       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:00:54.266702       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 11:00:54.266779       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:00:54.491376       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:00:54.491445       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:00:54.691719       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:00:54.692019       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:00:54.692035       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:00:54.737301       1 config.go:200] "Starting service config controller"
	I1123 11:00:54.737324       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:00:54.737365       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:00:54.737370       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:00:54.737385       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:00:54.737396       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:00:54.738122       1 config.go:309] "Starting node config controller"
	I1123 11:00:54.738142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:00:54.738148       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:00:54.844468       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:00:54.844504       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:00:54.844515       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [03dabe102ae9498275cd045ad7cb86b8bc606c065986b7682a76a3ce379b7780] <==
	E1123 11:00:43.959801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:00:43.979759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 11:00:43.979981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 11:00:43.980425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 11:00:43.980538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 11:00:43.980889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 11:00:43.980976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 11:00:43.996076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 11:00:43.996168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 11:00:43.996205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 11:00:43.996242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 11:00:43.996332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:00:43.996382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 11:00:43.996428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 11:00:43.996497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 11:00:43.996532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:00:43.996581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 11:00:43.998702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 11:00:44.005245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 11:00:44.817708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:00:44.833240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 11:00:44.859442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 11:00:44.878583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:00:44.889189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1123 11:00:47.351476       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:00:48 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:48.430586    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb1645a744afee3f34fa1455e112e50b-flexvolume-dir\") pod \"kube-controller-manager-default-k8s-diff-port-071466\" (UID: \"fb1645a744afee3f34fa1455e112e50b\") " pod="kube-system/kube-controller-manager-default-k8s-diff-port-071466"
	Nov 23 11:00:48 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:48.430673    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb1645a744afee3f34fa1455e112e50b-kubeconfig\") pod \"kube-controller-manager-default-k8s-diff-port-071466\" (UID: \"fb1645a744afee3f34fa1455e112e50b\") " pod="kube-system/kube-controller-manager-default-k8s-diff-port-071466"
	Nov 23 11:00:48 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:48.430771    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f340aa4fdadfe900f6b5d328741c3a78-kubeconfig\") pod \"kube-scheduler-default-k8s-diff-port-071466\" (UID: \"f340aa4fdadfe900f6b5d328741c3a78\") " pod="kube-system/kube-scheduler-default-k8s-diff-port-071466"
	Nov 23 11:00:48 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:48.636873    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-071466" podStartSLOduration=2.636853774 podStartE2EDuration="2.636853774s" podCreationTimestamp="2025-11-23 11:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:00:48.42474254 +0000 UTC m=+1.401754468" watchObservedRunningTime="2025-11-23 11:00:48.636853774 +0000 UTC m=+1.613865710"
	Nov 23 11:00:51 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:51.714670    1491 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 11:00:51 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:51.715825    1491 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.684368    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a1f31cd-7028-4474-ae74-b50d5307009a-lib-modules\") pod \"kindnet-2wbs5\" (UID: \"5a1f31cd-7028-4474-ae74-b50d5307009a\") " pod="kube-system/kindnet-2wbs5"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.684405    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5a1f31cd-7028-4474-ae74-b50d5307009a-cni-cfg\") pod \"kindnet-2wbs5\" (UID: \"5a1f31cd-7028-4474-ae74-b50d5307009a\") " pod="kube-system/kindnet-2wbs5"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.684450    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68rl\" (UniqueName: \"kubernetes.io/projected/5a1f31cd-7028-4474-ae74-b50d5307009a-kube-api-access-x68rl\") pod \"kindnet-2wbs5\" (UID: \"5a1f31cd-7028-4474-ae74-b50d5307009a\") " pod="kube-system/kindnet-2wbs5"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.684472    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a1f31cd-7028-4474-ae74-b50d5307009a-xtables-lock\") pod \"kindnet-2wbs5\" (UID: \"5a1f31cd-7028-4474-ae74-b50d5307009a\") " pod="kube-system/kindnet-2wbs5"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.824837    1491 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.887465    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce0d571b-d10b-446a-8824-44e1566eb31f-xtables-lock\") pod \"kube-proxy-5zfbc\" (UID: \"ce0d571b-d10b-446a-8824-44e1566eb31f\") " pod="kube-system/kube-proxy-5zfbc"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.887505    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce0d571b-d10b-446a-8824-44e1566eb31f-lib-modules\") pod \"kube-proxy-5zfbc\" (UID: \"ce0d571b-d10b-446a-8824-44e1566eb31f\") " pod="kube-system/kube-proxy-5zfbc"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.887530    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klqm2\" (UniqueName: \"kubernetes.io/projected/ce0d571b-d10b-446a-8824-44e1566eb31f-kube-api-access-klqm2\") pod \"kube-proxy-5zfbc\" (UID: \"ce0d571b-d10b-446a-8824-44e1566eb31f\") " pod="kube-system/kube-proxy-5zfbc"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.887558    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce0d571b-d10b-446a-8824-44e1566eb31f-kube-proxy\") pod \"kube-proxy-5zfbc\" (UID: \"ce0d571b-d10b-446a-8824-44e1566eb31f\") " pod="kube-system/kube-proxy-5zfbc"
	Nov 23 11:00:54 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:54.814032    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5zfbc" podStartSLOduration=2.814011322 podStartE2EDuration="2.814011322s" podCreationTimestamp="2025-11-23 11:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:00:54.812142126 +0000 UTC m=+7.789154078" watchObservedRunningTime="2025-11-23 11:00:54.814011322 +0000 UTC m=+7.791023258"
	Nov 23 11:00:56 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:56.022776    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2wbs5" podStartSLOduration=4.02272623 podStartE2EDuration="4.02272623s" podCreationTimestamp="2025-11-23 11:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:00:54.84430742 +0000 UTC m=+7.821319381" watchObservedRunningTime="2025-11-23 11:00:56.02272623 +0000 UTC m=+8.999738232"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.195108    1491 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.345752    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdl2g\" (UniqueName: \"kubernetes.io/projected/44dabc1e-0b98-4250-861a-5992ede34070-kube-api-access-wdl2g\") pod \"coredns-66bc5c9577-k6bmz\" (UID: \"44dabc1e-0b98-4250-861a-5992ede34070\") " pod="kube-system/coredns-66bc5c9577-k6bmz"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.345811    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44dabc1e-0b98-4250-861a-5992ede34070-config-volume\") pod \"coredns-66bc5c9577-k6bmz\" (UID: \"44dabc1e-0b98-4250-861a-5992ede34070\") " pod="kube-system/coredns-66bc5c9577-k6bmz"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.345836    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/152263f3-c362-4790-a756-3d028b31e04a-tmp\") pod \"storage-provisioner\" (UID: \"152263f3-c362-4790-a756-3d028b31e04a\") " pod="kube-system/storage-provisioner"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.345853    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swkrj\" (UniqueName: \"kubernetes.io/projected/152263f3-c362-4790-a756-3d028b31e04a-kube-api-access-swkrj\") pod \"storage-provisioner\" (UID: \"152263f3-c362-4790-a756-3d028b31e04a\") " pod="kube-system/storage-provisioner"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.925794    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k6bmz" podStartSLOduration=42.925774749 podStartE2EDuration="42.925774749s" podCreationTimestamp="2025-11-23 11:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:01:34.906133977 +0000 UTC m=+47.883145921" watchObservedRunningTime="2025-11-23 11:01:34.925774749 +0000 UTC m=+47.902786685"
	Nov 23 11:01:36 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:36.878917    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.878886156 podStartE2EDuration="42.878886156s" podCreationTimestamp="2025-11-23 11:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:01:34.949855205 +0000 UTC m=+47.926867132" watchObservedRunningTime="2025-11-23 11:01:36.878886156 +0000 UTC m=+49.855898092"
	Nov 23 11:01:36 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:36.964437    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drbc\" (UniqueName: \"kubernetes.io/projected/f8b4a07b-84e0-4042-b688-0f75fde332b2-kube-api-access-8drbc\") pod \"busybox\" (UID: \"f8b4a07b-84e0-4042-b688-0f75fde332b2\") " pod="default/busybox"
	
	
	==> storage-provisioner [f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60] <==
	I1123 11:01:34.907885       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 11:01:34.957971       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 11:01:34.958309       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 11:01:34.960894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:34.967919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:01:34.968194       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:01:34.968508       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071466_d1f06908-43c5-4138-b382-3c1f6f829bd0!
	I1123 11:01:34.973642       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6687691-e179-4c4e-b02b-d913321bfbad", APIVersion:"v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-071466_d1f06908-43c5-4138-b382-3c1f6f829bd0 became leader
	W1123 11:01:34.977206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:34.984738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:01:35.069512       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071466_d1f06908-43c5-4138-b382-3c1f6f829bd0!
	W1123 11:01:36.988676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:36.995994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:38.999548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:39.006171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:41.012425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:41.017883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:43.021608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:43.032002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:45.057309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:45.072633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:47.076605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:47.086221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-071466 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-071466
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-071466:

-- stdout --
	[
	    {
	        "Id": "c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a",
	        "Created": "2025-11-23T11:00:19.52400508Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1809898,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T11:00:19.581807861Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a/hostname",
	        "HostsPath": "/var/lib/docker/containers/c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a/hosts",
	        "LogPath": "/var/lib/docker/containers/c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a/c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a-json.log",
	        "Name": "/default-k8s-diff-port-071466",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-071466:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-071466",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c233f0259bfdaf563bcee4975cd5231a76172a36bb2d027566d26760d7712d6a",
	                "LowerDir": "/var/lib/docker/overlay2/0bd94846ae3639b9505a2696918cdb1c3c3d22c2aac69987a1be43a2c988740a-init/diff:/var/lib/docker/overlay2/fe0bef51c968206096993e9a75db2143cd9cd74d56696a257291ce63f851a2d4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0bd94846ae3639b9505a2696918cdb1c3c3d22c2aac69987a1be43a2c988740a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0bd94846ae3639b9505a2696918cdb1c3c3d22c2aac69987a1be43a2c988740a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0bd94846ae3639b9505a2696918cdb1c3c3d22c2aac69987a1be43a2c988740a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-071466",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-071466/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-071466",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-071466",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-071466",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "73ffca0e5ed3db4979fd483935711dc0dc0f7eb3edd65d044115e687b59538d1",
	            "SandboxKey": "/var/run/docker/netns/73ffca0e5ed3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35284"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35285"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35288"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35286"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "35287"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-071466": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5e:50:42:3c:8c:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c99e598dbc50acc2b850b8fed135c2980d7494ab20f2df75ff9827da7f784687",
	                    "EndpointID": "e8e283486275522c1a16758d954b937539b4ddd8e6f1a5003fa968d8d5d87bd6",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-071466",
	                        "c233f0259bfd"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-071466 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-071466 logs -n 25: (1.600066785s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable metrics-server -p no-preload-055571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 10:58 UTC │ 23 Nov 25 10:58 UTC │
	│ stop    │ -p no-preload-055571 --alsologtostderr -v=3                                                                                                                                                                                                         │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 10:58 UTC │ 23 Nov 25 10:59 UTC │
	│ addons  │ enable dashboard -p no-preload-055571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ start   │ -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ addons  │ enable metrics-server -p embed-certs-969029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ stop    │ -p embed-certs-969029 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ addons  │ enable dashboard -p embed-certs-969029 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 10:59 UTC │
	│ start   │ -p embed-certs-969029 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 10:59 UTC │ 23 Nov 25 11:00 UTC │
	│ image   │ no-preload-055571 image list --format=json                                                                                                                                                                                                          │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ pause   │ -p no-preload-055571 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ unpause │ -p no-preload-055571 --alsologtostderr -v=1                                                                                                                                                                                                         │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ delete  │ -p no-preload-055571                                                                                                                                                                                                                                │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ delete  │ -p no-preload-055571                                                                                                                                                                                                                                │ no-preload-055571            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ delete  │ -p disable-driver-mounts-436374                                                                                                                                                                                                                     │ disable-driver-mounts-436374 │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ start   │ -p default-k8s-diff-port-071466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-071466 │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:01 UTC │
	│ image   │ embed-certs-969029 image list --format=json                                                                                                                                                                                                         │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ pause   │ -p embed-certs-969029 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ unpause │ -p embed-certs-969029 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ delete  │ -p embed-certs-969029                                                                                                                                                                                                                               │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ delete  │ -p embed-certs-969029                                                                                                                                                                                                                               │ embed-certs-969029           │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:00 UTC │
	│ start   │ -p newest-cni-268828 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-268828            │ jenkins │ v1.37.0 │ 23 Nov 25 11:00 UTC │ 23 Nov 25 11:01 UTC │
	│ addons  │ enable metrics-server -p newest-cni-268828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-268828            │ jenkins │ v1.37.0 │ 23 Nov 25 11:01 UTC │ 23 Nov 25 11:01 UTC │
	│ stop    │ -p newest-cni-268828 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-268828            │ jenkins │ v1.37.0 │ 23 Nov 25 11:01 UTC │ 23 Nov 25 11:01 UTC │
	│ addons  │ enable dashboard -p newest-cni-268828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-268828            │ jenkins │ v1.37.0 │ 23 Nov 25 11:01 UTC │ 23 Nov 25 11:01 UTC │
	│ start   │ -p newest-cni-268828 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-268828            │ jenkins │ v1.37.0 │ 23 Nov 25 11:01 UTC │ 23 Nov 25 11:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
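For reference, the start invocation recorded for the failing profile in the Audit table above can be replayed with the same flags; this is a sketch that assumes the same out/minikube-linux-arm64 binary and a comparable arm64 Docker host:

	out/minikube-linux-arm64 start -p default-k8s-diff-port-071466 \
	  --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 \
	  --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1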
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 11:01:33
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 11:01:33.311891 1816435 out.go:360] Setting OutFile to fd 1 ...
	I1123 11:01:33.312062 1816435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:01:33.312093 1816435 out.go:374] Setting ErrFile to fd 2...
	I1123 11:01:33.312112 1816435 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 11:01:33.312406 1816435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 11:01:33.312929 1816435 out.go:368] Setting JSON to false
	I1123 11:01:33.313971 1816435 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":42239,"bootTime":1763853455,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 11:01:33.314068 1816435 start.go:143] virtualization:  
	I1123 11:01:33.317425 1816435 out.go:179] * [newest-cni-268828] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 11:01:33.321143 1816435 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 11:01:33.321286 1816435 notify.go:221] Checking for updates...
	I1123 11:01:33.326775 1816435 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 11:01:33.329643 1816435 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 11:01:33.332481 1816435 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 11:01:33.335518 1816435 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 11:01:33.338438 1816435 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W1123 11:01:29.443397 1809512 node_ready.go:57] node "default-k8s-diff-port-071466" has "Ready":"False" status (will retry)
	W1123 11:01:31.942936 1809512 node_ready.go:57] node "default-k8s-diff-port-071466" has "Ready":"False" status (will retry)
	I1123 11:01:33.341829 1816435 config.go:182] Loaded profile config "newest-cni-268828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 11:01:33.342487 1816435 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 11:01:33.377137 1816435 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 11:01:33.377264 1816435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:01:33.434711 1816435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:01:33.424867464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:01:33.434866 1816435 docker.go:319] overlay module found
	I1123 11:01:33.438035 1816435 out.go:179] * Using the docker driver based on existing profile
	I1123 11:01:33.440866 1816435 start.go:309] selected driver: docker
	I1123 11:01:33.440899 1816435 start.go:927] validating driver "docker" against &{Name:newest-cni-268828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268828 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested
:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:01:33.441143 1816435 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 11:01:33.441825 1816435 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 11:01:33.502236 1816435 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 11:01:33.492896941 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 11:01:33.502580 1816435 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 11:01:33.502614 1816435 cni.go:84] Creating CNI manager for ""
	I1123 11:01:33.502678 1816435 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 11:01:33.502721 1816435 start.go:353] cluster config:
	{Name:newest-cni-268828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000
.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:01:33.505842 1816435 out.go:179] * Starting "newest-cni-268828" primary control-plane node in "newest-cni-268828" cluster
	I1123 11:01:33.508605 1816435 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 11:01:33.511498 1816435 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 11:01:33.514280 1816435 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 11:01:33.514245 1816435 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 11:01:33.514344 1816435 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 11:01:33.514355 1816435 cache.go:65] Caching tarball of preloaded images
	I1123 11:01:33.514434 1816435 preload.go:238] Found /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 11:01:33.514445 1816435 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 11:01:33.514563 1816435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/config.json ...
	I1123 11:01:33.536258 1816435 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 11:01:33.536283 1816435 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 11:01:33.536304 1816435 cache.go:243] Successfully downloaded all kic artifacts
	I1123 11:01:33.536335 1816435 start.go:360] acquireMachinesLock for newest-cni-268828: {Name:mk6fb61bd7d279f886e7ed4e66b2ff775ec57a78 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 11:01:33.536408 1816435 start.go:364] duration metric: took 45.045µs to acquireMachinesLock for "newest-cni-268828"
	I1123 11:01:33.536432 1816435 start.go:96] Skipping create...Using existing machine configuration
	I1123 11:01:33.536444 1816435 fix.go:54] fixHost starting: 
	I1123 11:01:33.536715 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:33.554152 1816435 fix.go:112] recreateIfNeeded on newest-cni-268828: state=Stopped err=<nil>
	W1123 11:01:33.554181 1816435 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 11:01:33.943071 1809512 node_ready.go:57] node "default-k8s-diff-port-071466" has "Ready":"False" status (will retry)
	I1123 11:01:34.442855 1809512 node_ready.go:49] node "default-k8s-diff-port-071466" is "Ready"
	I1123 11:01:34.442889 1809512 node_ready.go:38] duration metric: took 40.002980398s for node "default-k8s-diff-port-071466" to be "Ready" ...
	I1123 11:01:34.442904 1809512 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:01:34.442975 1809512 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:01:34.457080 1809512 api_server.go:72] duration metric: took 41.836360272s to wait for apiserver process to appear ...
	I1123 11:01:34.457105 1809512 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:01:34.457124 1809512 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 11:01:34.477066 1809512 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 11:01:34.478150 1809512 api_server.go:141] control plane version: v1.34.1
	I1123 11:01:34.478175 1809512 api_server.go:131] duration metric: took 21.064441ms to wait for apiserver health ...
	I1123 11:01:34.478184 1809512 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:01:34.481548 1809512 system_pods.go:59] 8 kube-system pods found
	I1123 11:01:34.481642 1809512 system_pods.go:61] "coredns-66bc5c9577-k6bmz" [44dabc1e-0b98-4250-861a-5992ede34070] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:01:34.481665 1809512 system_pods.go:61] "etcd-default-k8s-diff-port-071466" [c7f6bb44-f6a0-400b-8ef2-57e9bfa53d69] Running
	I1123 11:01:34.481701 1809512 system_pods.go:61] "kindnet-2wbs5" [5a1f31cd-7028-4474-ae74-b50d5307009a] Running
	I1123 11:01:34.481727 1809512 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-071466" [718a0115-63c0-4917-a905-077d8428220c] Running
	I1123 11:01:34.481745 1809512 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-071466" [aa29c3d0-a2b7-4c4c-a342-d4162fc5ac23] Running
	I1123 11:01:34.481780 1809512 system_pods.go:61] "kube-proxy-5zfbc" [ce0d571b-d10b-446a-8824-44e1566eb31f] Running
	I1123 11:01:34.481798 1809512 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-071466" [d08fc5a3-8d83-4529-89c5-241765de3656] Running
	I1123 11:01:34.481829 1809512 system_pods.go:61] "storage-provisioner" [152263f3-c362-4790-a756-3d028b31e04a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:01:34.481859 1809512 system_pods.go:74] duration metric: took 3.668428ms to wait for pod list to return data ...
	I1123 11:01:34.481882 1809512 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:01:34.487142 1809512 default_sa.go:45] found service account: "default"
	I1123 11:01:34.487223 1809512 default_sa.go:55] duration metric: took 5.321596ms for default service account to be created ...
	I1123 11:01:34.487248 1809512 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 11:01:34.492109 1809512 system_pods.go:86] 8 kube-system pods found
	I1123 11:01:34.492143 1809512 system_pods.go:89] "coredns-66bc5c9577-k6bmz" [44dabc1e-0b98-4250-861a-5992ede34070] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:01:34.492150 1809512 system_pods.go:89] "etcd-default-k8s-diff-port-071466" [c7f6bb44-f6a0-400b-8ef2-57e9bfa53d69] Running
	I1123 11:01:34.492157 1809512 system_pods.go:89] "kindnet-2wbs5" [5a1f31cd-7028-4474-ae74-b50d5307009a] Running
	I1123 11:01:34.492161 1809512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071466" [718a0115-63c0-4917-a905-077d8428220c] Running
	I1123 11:01:34.492166 1809512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071466" [aa29c3d0-a2b7-4c4c-a342-d4162fc5ac23] Running
	I1123 11:01:34.492170 1809512 system_pods.go:89] "kube-proxy-5zfbc" [ce0d571b-d10b-446a-8824-44e1566eb31f] Running
	I1123 11:01:34.492175 1809512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071466" [d08fc5a3-8d83-4529-89c5-241765de3656] Running
	I1123 11:01:34.492181 1809512 system_pods.go:89] "storage-provisioner" [152263f3-c362-4790-a756-3d028b31e04a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:01:34.492203 1809512 retry.go:31] will retry after 238.611806ms: missing components: kube-dns
	I1123 11:01:34.735870 1809512 system_pods.go:86] 8 kube-system pods found
	I1123 11:01:34.735953 1809512 system_pods.go:89] "coredns-66bc5c9577-k6bmz" [44dabc1e-0b98-4250-861a-5992ede34070] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 11:01:34.735977 1809512 system_pods.go:89] "etcd-default-k8s-diff-port-071466" [c7f6bb44-f6a0-400b-8ef2-57e9bfa53d69] Running
	I1123 11:01:34.735999 1809512 system_pods.go:89] "kindnet-2wbs5" [5a1f31cd-7028-4474-ae74-b50d5307009a] Running
	I1123 11:01:34.736018 1809512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071466" [718a0115-63c0-4917-a905-077d8428220c] Running
	I1123 11:01:34.736046 1809512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071466" [aa29c3d0-a2b7-4c4c-a342-d4162fc5ac23] Running
	I1123 11:01:34.736066 1809512 system_pods.go:89] "kube-proxy-5zfbc" [ce0d571b-d10b-446a-8824-44e1566eb31f] Running
	I1123 11:01:34.736086 1809512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071466" [d08fc5a3-8d83-4529-89c5-241765de3656] Running
	I1123 11:01:34.736105 1809512 system_pods.go:89] "storage-provisioner" [152263f3-c362-4790-a756-3d028b31e04a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 11:01:34.736153 1809512 retry.go:31] will retry after 240.954117ms: missing components: kube-dns
	I1123 11:01:34.982245 1809512 system_pods.go:86] 8 kube-system pods found
	I1123 11:01:34.982280 1809512 system_pods.go:89] "coredns-66bc5c9577-k6bmz" [44dabc1e-0b98-4250-861a-5992ede34070] Running
	I1123 11:01:34.982288 1809512 system_pods.go:89] "etcd-default-k8s-diff-port-071466" [c7f6bb44-f6a0-400b-8ef2-57e9bfa53d69] Running
	I1123 11:01:34.982295 1809512 system_pods.go:89] "kindnet-2wbs5" [5a1f31cd-7028-4474-ae74-b50d5307009a] Running
	I1123 11:01:34.982299 1809512 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-071466" [718a0115-63c0-4917-a905-077d8428220c] Running
	I1123 11:01:34.982304 1809512 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-071466" [aa29c3d0-a2b7-4c4c-a342-d4162fc5ac23] Running
	I1123 11:01:34.982308 1809512 system_pods.go:89] "kube-proxy-5zfbc" [ce0d571b-d10b-446a-8824-44e1566eb31f] Running
	I1123 11:01:34.982312 1809512 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-071466" [d08fc5a3-8d83-4529-89c5-241765de3656] Running
	I1123 11:01:34.982316 1809512 system_pods.go:89] "storage-provisioner" [152263f3-c362-4790-a756-3d028b31e04a] Running
	I1123 11:01:34.982324 1809512 system_pods.go:126] duration metric: took 495.058934ms to wait for k8s-apps to be running ...
	I1123 11:01:34.982335 1809512 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 11:01:34.982398 1809512 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 11:01:34.997191 1809512 system_svc.go:56] duration metric: took 14.846942ms WaitForService to wait for kubelet
	I1123 11:01:34.997225 1809512 kubeadm.go:587] duration metric: took 42.376509466s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 11:01:34.997256 1809512 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:01:35.000490 1809512 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:01:35.000532 1809512 node_conditions.go:123] node cpu capacity is 2
	I1123 11:01:35.000546 1809512 node_conditions.go:105] duration metric: took 3.284815ms to run NodePressure ...
	I1123 11:01:35.000559 1809512 start.go:242] waiting for startup goroutines ...
	I1123 11:01:35.000567 1809512 start.go:247] waiting for cluster config update ...
	I1123 11:01:35.000580 1809512 start.go:256] writing updated cluster config ...
	I1123 11:01:35.000940 1809512 ssh_runner.go:195] Run: rm -f paused
	I1123 11:01:35.006421 1809512 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:01:35.011726 1809512 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k6bmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.017520 1809512 pod_ready.go:94] pod "coredns-66bc5c9577-k6bmz" is "Ready"
	I1123 11:01:35.017550 1809512 pod_ready.go:86] duration metric: took 5.791394ms for pod "coredns-66bc5c9577-k6bmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.030638 1809512 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.036565 1809512 pod_ready.go:94] pod "etcd-default-k8s-diff-port-071466" is "Ready"
	I1123 11:01:35.036592 1809512 pod_ready.go:86] duration metric: took 5.927431ms for pod "etcd-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.039543 1809512 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.044766 1809512 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-071466" is "Ready"
	I1123 11:01:35.044841 1809512 pod_ready.go:86] duration metric: took 5.225484ms for pod "kube-apiserver-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.047859 1809512 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.412634 1809512 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-071466" is "Ready"
	I1123 11:01:35.412709 1809512 pod_ready.go:86] duration metric: took 364.784645ms for pod "kube-controller-manager-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:35.616949 1809512 pod_ready.go:83] waiting for pod "kube-proxy-5zfbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:36.012895 1809512 pod_ready.go:94] pod "kube-proxy-5zfbc" is "Ready"
	I1123 11:01:36.012925 1809512 pod_ready.go:86] duration metric: took 395.901246ms for pod "kube-proxy-5zfbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:36.210969 1809512 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:36.615932 1809512 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-071466" is "Ready"
	I1123 11:01:36.616006 1809512 pod_ready.go:86] duration metric: took 405.011338ms for pod "kube-scheduler-default-k8s-diff-port-071466" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 11:01:36.616033 1809512 pod_ready.go:40] duration metric: took 1.609556549s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 11:01:36.671404 1809512 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:01:36.674879 1809512 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-071466" cluster and "default" namespace by default
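At this point the log's readiness checks (node "Ready", kube-system pods, control-plane components) have all passed for this profile; the same checks can be repeated by hand against the context named above, a sketch assuming the kubeconfig that minikube just wrote:

	kubectl --context default-k8s-diff-port-071466 get nodes
	kubectl --context default-k8s-diff-port-071466 get pods -n kube-system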
	I1123 11:01:33.557482 1816435 out.go:252] * Restarting existing docker container for "newest-cni-268828" ...
	I1123 11:01:33.557570 1816435 cli_runner.go:164] Run: docker start newest-cni-268828
	I1123 11:01:33.841633 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:33.866430 1816435 kic.go:430] container "newest-cni-268828" state is running.
	I1123 11:01:33.866795 1816435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-268828
	I1123 11:01:33.890875 1816435 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/config.json ...
	I1123 11:01:33.891106 1816435 machine.go:94] provisionDockerMachine start ...
	I1123 11:01:33.891166 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:33.911488 1816435 main.go:143] libmachine: Using SSH client type: native
	I1123 11:01:33.911813 1816435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35294 <nil> <nil>}
	I1123 11:01:33.911822 1816435 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 11:01:33.912596 1816435 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 11:01:37.075016 1816435 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-268828
	
	I1123 11:01:37.075045 1816435 ubuntu.go:182] provisioning hostname "newest-cni-268828"
	I1123 11:01:37.075108 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.094156 1816435 main.go:143] libmachine: Using SSH client type: native
	I1123 11:01:37.094461 1816435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35294 <nil> <nil>}
	I1123 11:01:37.094475 1816435 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-268828 && echo "newest-cni-268828" | sudo tee /etc/hostname
	I1123 11:01:37.261972 1816435 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-268828
	
	I1123 11:01:37.262053 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.288040 1816435 main.go:143] libmachine: Using SSH client type: native
	I1123 11:01:37.288392 1816435 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 35294 <nil> <nil>}
	I1123 11:01:37.288415 1816435 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-268828' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-268828/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-268828' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 11:01:37.443255 1816435 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 11:01:37.443281 1816435 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21968-1582671/.minikube CaCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21968-1582671/.minikube}
	I1123 11:01:37.443304 1816435 ubuntu.go:190] setting up certificates
	I1123 11:01:37.443319 1816435 provision.go:84] configureAuth start
	I1123 11:01:37.443379 1816435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-268828
	I1123 11:01:37.461370 1816435 provision.go:143] copyHostCerts
	I1123 11:01:37.461442 1816435 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem, removing ...
	I1123 11:01:37.461461 1816435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem
	I1123 11:01:37.461543 1816435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.pem (1078 bytes)
	I1123 11:01:37.461642 1816435 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem, removing ...
	I1123 11:01:37.461651 1816435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem
	I1123 11:01:37.461678 1816435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/cert.pem (1123 bytes)
	I1123 11:01:37.461739 1816435 exec_runner.go:144] found /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem, removing ...
	I1123 11:01:37.461747 1816435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem
	I1123 11:01:37.461772 1816435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21968-1582671/.minikube/key.pem (1675 bytes)
	I1123 11:01:37.461821 1816435 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem org=jenkins.newest-cni-268828 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-268828]
	I1123 11:01:37.526677 1816435 provision.go:177] copyRemoteCerts
	I1123 11:01:37.526741 1816435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 11:01:37.526825 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.543571 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:37.650790 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 11:01:37.669226 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 11:01:37.686635 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 11:01:37.705140 1816435 provision.go:87] duration metric: took 261.778989ms to configureAuth
	I1123 11:01:37.705217 1816435 ubuntu.go:206] setting minikube options for container-runtime
	I1123 11:01:37.705476 1816435 config.go:182] Loaded profile config "newest-cni-268828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 11:01:37.705506 1816435 machine.go:97] duration metric: took 3.814390678s to provisionDockerMachine
	I1123 11:01:37.705536 1816435 start.go:293] postStartSetup for "newest-cni-268828" (driver="docker")
	I1123 11:01:37.705559 1816435 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 11:01:37.705635 1816435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 11:01:37.705706 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.723642 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:37.826998 1816435 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 11:01:37.830376 1816435 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 11:01:37.830408 1816435 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 11:01:37.830425 1816435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/addons for local assets ...
	I1123 11:01:37.830480 1816435 filesync.go:126] Scanning /home/jenkins/minikube-integration/21968-1582671/.minikube/files for local assets ...
	I1123 11:01:37.830557 1816435 filesync.go:149] local asset: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem -> 15845322.pem in /etc/ssl/certs
	I1123 11:01:37.830661 1816435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 11:01:37.838288 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 11:01:37.857108 1816435 start.go:296] duration metric: took 151.542897ms for postStartSetup
	I1123 11:01:37.857230 1816435 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 11:01:37.857304 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.875768 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:37.975988 1816435 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 11:01:37.980793 1816435 fix.go:56] duration metric: took 4.444342897s for fixHost
	I1123 11:01:37.980817 1816435 start.go:83] releasing machines lock for "newest-cni-268828", held for 4.444396344s
	I1123 11:01:37.980888 1816435 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-268828
	I1123 11:01:37.999505 1816435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 11:01:37.999673 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:37.999875 1816435 ssh_runner.go:195] Run: cat /version.json
	I1123 11:01:37.999914 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:38.039003 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:38.046071 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:38.146855 1816435 ssh_runner.go:195] Run: systemctl --version
	I1123 11:01:38.243039 1816435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 11:01:38.247701 1816435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 11:01:38.247778 1816435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 11:01:38.255492 1816435 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 11:01:38.255527 1816435 start.go:496] detecting cgroup driver to use...
	I1123 11:01:38.255558 1816435 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 11:01:38.255607 1816435 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 11:01:38.273649 1816435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 11:01:38.287140 1816435 docker.go:218] disabling cri-docker service (if available) ...
	I1123 11:01:38.287252 1816435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 11:01:38.311748 1816435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 11:01:38.331314 1816435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 11:01:38.458173 1816435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 11:01:38.573731 1816435 docker.go:234] disabling docker service ...
	I1123 11:01:38.573792 1816435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 11:01:38.588674 1816435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 11:01:38.601633 1816435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 11:01:38.726157 1816435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 11:01:38.850384 1816435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 11:01:38.863370 1816435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 11:01:38.879007 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 11:01:38.891891 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 11:01:38.901962 1816435 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 11:01:38.902078 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 11:01:38.910984 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 11:01:38.919868 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 11:01:38.929246 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 11:01:38.938639 1816435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 11:01:38.947089 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 11:01:38.956207 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 11:01:38.965309 1816435 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 11:01:38.974859 1816435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 11:01:38.982715 1816435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 11:01:38.989922 1816435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:01:39.109771 1816435 ssh_runner.go:195] Run: sudo systemctl restart containerd
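The sed edits above switch containerd to the cgroupfs driver, set the pause image to registry.k8s.io/pause:3.10.1, and point the CNI conf_dir at /etc/cni/net.d before this restart; they can be spot-checked from the host with something like the following (a sketch; key paths differ between containerd config versions, so it greps rather than assuming exact TOML sections):

	out/minikube-linux-arm64 -p newest-cni-268828 ssh "sudo grep -E 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml"
	out/minikube-linux-arm64 -p newest-cni-268828 ssh "sudo crictl info | grep -i cgroup"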
	I1123 11:01:39.247285 1816435 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 11:01:39.247362 1816435 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 11:01:39.251309 1816435 start.go:564] Will wait 60s for crictl version
	I1123 11:01:39.251420 1816435 ssh_runner.go:195] Run: which crictl
	I1123 11:01:39.254893 1816435 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 11:01:39.286098 1816435 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 11:01:39.286219 1816435 ssh_runner.go:195] Run: containerd --version
	I1123 11:01:39.307296 1816435 ssh_runner.go:195] Run: containerd --version
	I1123 11:01:39.331081 1816435 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 11:01:39.334184 1816435 cli_runner.go:164] Run: docker network inspect newest-cni-268828 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 11:01:39.350249 1816435 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 11:01:39.354062 1816435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:01:39.367134 1816435 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 11:01:39.370209 1816435 kubeadm.go:884] updating cluster {Name:newest-cni-268828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks
:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 11:01:39.370372 1816435 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 11:01:39.370462 1816435 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:01:39.396326 1816435 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 11:01:39.396355 1816435 containerd.go:534] Images already preloaded, skipping extraction
	I1123 11:01:39.396418 1816435 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 11:01:39.423162 1816435 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 11:01:39.423214 1816435 cache_images.go:86] Images are preloaded, skipping loading
	I1123 11:01:39.423223 1816435 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 11:01:39.423373 1816435 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-268828 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 11:01:39.423470 1816435 ssh_runner.go:195] Run: sudo crictl info
	I1123 11:01:39.451640 1816435 cni.go:84] Creating CNI manager for ""
	I1123 11:01:39.451662 1816435 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 11:01:39.451682 1816435 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 11:01:39.451734 1816435 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-268828 NodeName:newest-cni-268828 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 11:01:39.451897 1816435 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-268828"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 11:01:39.451985 1816435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 11:01:39.459938 1816435 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 11:01:39.460008 1816435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 11:01:39.468043 1816435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1123 11:01:39.485655 1816435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 11:01:39.502843 1816435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
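
The kubeadm config dumped above is rendered by minikube and copied to /var/tmp/minikube/kubeadm.yaml.new before the cluster is (re)started. The following is a minimal sketch, not minikube's actual generator, of producing such a fragment with Go's text/template; the struct, field names, and template body are illustrative only, with values taken from the log.

// kubeadm_config_sketch.go
//
// Minimal sketch (NOT minikube's real code) of rendering a kubeadm
// config fragment like the one logged above using text/template.
package main

import (
	"os"
	"text/template"
)

type clusterCfg struct {
	AdvertiseAddress  string
	BindPort          int
	PodSubnet         string
	ServiceSubnet     string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	c := clusterCfg{
		AdvertiseAddress:  "192.168.76.2",
		BindPort:          8443,
		PodSubnet:         "10.42.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
		KubernetesVersion: "v1.34.1",
	}
	// Render to stdout; in the log the rendered file is instead copied
	// to /var/tmp/minikube/kubeadm.yaml.new on the node.
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, c)
}
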
	I1123 11:01:39.520387 1816435 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 11:01:39.524799 1816435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 11:01:39.536062 1816435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:01:39.712054 1816435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:01:39.735856 1816435 certs.go:69] Setting up /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828 for IP: 192.168.76.2
	I1123 11:01:39.735926 1816435 certs.go:195] generating shared ca certs ...
	I1123 11:01:39.735958 1816435 certs.go:227] acquiring lock for ca certs: {Name:mk3cca888d785818ac92c3c8d4e66a37bae0b8f5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:01:39.736132 1816435 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key
	I1123 11:01:39.736219 1816435 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key
	I1123 11:01:39.736252 1816435 certs.go:257] generating profile certs ...
	I1123 11:01:39.736392 1816435 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/client.key
	I1123 11:01:39.736504 1816435 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/apiserver.key.ebdf4d7d
	I1123 11:01:39.736596 1816435 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/proxy-client.key
	I1123 11:01:39.736754 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem (1338 bytes)
	W1123 11:01:39.736826 1816435 certs.go:480] ignoring /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532_empty.pem, impossibly tiny 0 bytes
	I1123 11:01:39.736858 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 11:01:39.736915 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/ca.pem (1078 bytes)
	I1123 11:01:39.736975 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/cert.pem (1123 bytes)
	I1123 11:01:39.737039 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/key.pem (1675 bytes)
	I1123 11:01:39.737125 1816435 certs.go:484] found cert: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem (1708 bytes)
	I1123 11:01:39.737864 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 11:01:39.774012 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 11:01:39.793521 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 11:01:39.813381 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 11:01:39.833798 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 11:01:39.855901 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 11:01:39.886615 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 11:01:39.922698 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/newest-cni-268828/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 11:01:39.952248 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/certs/1584532.pem --> /usr/share/ca-certificates/1584532.pem (1338 bytes)
	I1123 11:01:39.974127 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/ssl/certs/15845322.pem --> /usr/share/ca-certificates/15845322.pem (1708 bytes)
	I1123 11:01:39.993397 1816435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 11:01:40.025743 1816435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 11:01:40.049830 1816435 ssh_runner.go:195] Run: openssl version
	I1123 11:01:40.058394 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/15845322.pem && ln -fs /usr/share/ca-certificates/15845322.pem /etc/ssl/certs/15845322.pem"
	I1123 11:01:40.068314 1816435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/15845322.pem
	I1123 11:01:40.073301 1816435 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 10:17 /usr/share/ca-certificates/15845322.pem
	I1123 11:01:40.073402 1816435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/15845322.pem
	I1123 11:01:40.117652 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/15845322.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 11:01:40.126193 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 11:01:40.136591 1816435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:01:40.140664 1816435 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 10:10 /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:01:40.140771 1816435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 11:01:40.184569 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 11:01:40.193180 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1584532.pem && ln -fs /usr/share/ca-certificates/1584532.pem /etc/ssl/certs/1584532.pem"
	I1123 11:01:40.201762 1816435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1584532.pem
	I1123 11:01:40.205608 1816435 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 10:17 /usr/share/ca-certificates/1584532.pem
	I1123 11:01:40.205666 1816435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1584532.pem
	I1123 11:01:40.247143 1816435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1584532.pem /etc/ssl/certs/51391683.0"
	I1123 11:01:40.255258 1816435 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 11:01:40.259403 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 11:01:40.301498 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 11:01:40.344708 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 11:01:40.395988 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 11:01:40.464440 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 11:01:40.535122 1816435 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
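
The `openssl x509 -noout -checkend 86400` invocations above each ask whether a control-plane certificate expires within the next 24 hours. A minimal Go sketch of the equivalent check follows; the certificate path is a placeholder, and this is not the code minikube itself runs.

// certcheck_sketch.go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within the next duration d (the -checkend 86400 check uses 24h).
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(2)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
		os.Exit(1) // mirrors openssl -checkend's non-zero exit code
	}
	fmt.Println("certificate is valid for at least 24h")
}
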
	I1123 11:01:40.592291 1816435 kubeadm.go:401] StartCluster: {Name:newest-cni-268828 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-268828 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 11:01:40.592447 1816435 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 11:01:40.592561 1816435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 11:01:40.628734 1816435 cri.go:89] found id: "9b9c5695adc9b34096e7b1153be0af91f64a7358197c4136e6d7cdcb8ee356e7"
	I1123 11:01:40.628806 1816435 cri.go:89] found id: "1e88c729158018c727fa4042547a53fdff2079f1c7ebbb4769d5f7469b29080a"
	I1123 11:01:40.628824 1816435 cri.go:89] found id: "5644a1b40970e031bf73f550647dabd25a735da17ddb0357563e863d6b483b68"
	I1123 11:01:40.628842 1816435 cri.go:89] found id: "e75e6d08ee4d03d06b9fd772ec785f63b5cac83213afac2a42cf9026fc4779a9"
	I1123 11:01:40.628885 1816435 cri.go:89] found id: "ed3c0f3efa1a7cbf7838a8dd0d6c68bea120cbc9adee3f0f4c366f6af82b718a"
	I1123 11:01:40.628906 1816435 cri.go:89] found id: "0834348ab3920104bae33d60d53b6fa926c7d9cdc7c9c7dc945181467dc0a7d1"
	I1123 11:01:40.628931 1816435 cri.go:89] found id: "096ff5530e7ceeaedaaf18ad87cf66bcaea7eb9f9bdbccd13818660c20070473"
	I1123 11:01:40.628948 1816435 cri.go:89] found id: ""
	I1123 11:01:40.629026 1816435 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1123 11:01:40.660773 1816435 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7","pid":825,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7/rootfs","created":"2025-11-23T11:01:40.441594659Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-268828_fc932fa3859a67e46de8ca75a8dabfc8","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-268828","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"fc932fa3859a67e46de8ca75a8dabfc8"},"owner":"root"},{"ociVersion":"1.2.1","id":"9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b","pid":0,"status":"stopped","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-268828_52f3dd0ac01467ef7acdb026602f01ce","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-268828","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"52f3dd0ac01467ef7acdb026602f01ce"},"owner":"root"},{"ociVersion":"1.2.1","id":"e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17","pid":897,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17/rootfs","created":"2025-11-23T11:01:40.5089831Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-268828_6b45b66204df9fbb9e1ee3e76da07f0b","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-268828","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"6b45b66204df9fbb9e1ee3e76da07f0b"},"owner":"root"}]
	I1123 11:01:40.660953 1816435 cri.go:126] list returned 3 containers
	I1123 11:01:40.660983 1816435 cri.go:129] container: {ID:9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7 Status:running}
	I1123 11:01:40.661036 1816435 cri.go:131] skipping 9669a1699ae10adf43623be07dcceba5f319ae5866382314f990f38b03a5ffc7 - not in ps
	I1123 11:01:40.661067 1816435 cri.go:129] container: {ID:9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b Status:stopped}
	I1123 11:01:40.661091 1816435 cri.go:131] skipping 9ae061ec6329e08fe41d65e52504688f57842e97287e97c15812c9c8c6d9da4b - not in ps
	I1123 11:01:40.661109 1816435 cri.go:129] container: {ID:e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17 Status:running}
	I1123 11:01:40.661127 1816435 cri.go:131] skipping e6ec02a5b08f362c80cce9e07147158b1b2729b828b882be56af62c0b10dbd17 - not in ps
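
The cri.go lines above decode the `runc ... list -f json` output and skip entries that do not also appear in `crictl ps`. A minimal sketch of decoding that JSON follows; the struct mirrors only the fields visible in the log and is not minikube's implementation.

// runclist_sketch.go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer models the subset of `runc list -f json` fields seen
// in the log above (id, pid, status, annotations).
type runcContainer struct {
	ID          string            `json:"id"`
	Pid         int               `json:"pid"`
	Status      string            `json:"status"`
	Annotations map[string]string `json:"annotations"`
}

func main() {
	// Same command as in the log; needs root on a containerd/runc host.
	out, err := exec.Command("sudo", "runc",
		"--root", "/run/containerd/runc/k8s.io", "list", "-f", "json").Output()
	if err != nil {
		panic(err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		panic(err)
	}
	for _, c := range containers {
		if c.Status != "running" {
			continue // e.g. the "stopped" etcd sandbox in the log
		}
		fmt.Printf("%s pod=%s\n", c.ID[:12], c.Annotations["io.kubernetes.cri.sandbox-name"])
	}
}
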
	I1123 11:01:40.661210 1816435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 11:01:40.672042 1816435 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 11:01:40.672101 1816435 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 11:01:40.672194 1816435 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 11:01:40.693380 1816435 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 11:01:40.694093 1816435 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-268828" does not appear in /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 11:01:40.694427 1816435 kubeconfig.go:62] /home/jenkins/minikube-integration/21968-1582671/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-268828" cluster setting kubeconfig missing "newest-cni-268828" context setting]
	I1123 11:01:40.694983 1816435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:01:40.697056 1816435 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 11:01:40.719338 1816435 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 11:01:40.719421 1816435 kubeadm.go:602] duration metric: took 47.291204ms to restartPrimaryControlPlane
	I1123 11:01:40.719446 1816435 kubeadm.go:403] duration metric: took 127.165668ms to StartCluster
	I1123 11:01:40.719486 1816435 settings.go:142] acquiring lock: {Name:mk2ffa164862318fd53ac563f81d54c15c17157b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 11:01:40.719578 1816435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 11:01:40.720550 1816435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/kubeconfig: {Name:mkde132fbc4b94966d064dcf2bb5cfef3cdfba0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
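
The kubeconfig repair logged above adds the missing "newest-cni-268828" cluster and context entries to the local kubeconfig. A minimal client-go/clientcmd sketch of that kind of repair follows; the file path, cluster name, server URL, and TLS handling are all placeholders, not values taken from minikube's kubeconfig.go.

// kubeconfigrepair_sketch.go
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	const (
		path   = "/home/user/.kube/config" // placeholder path
		name   = "newest-cni-268828"
		server = "https://192.168.76.2:8443"
	)
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = server
		c.InsecureSkipTLSVerify = true // sketch only; minikube records a CA instead
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	// Write the repaired kubeconfig back to disk.
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
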
	I1123 11:01:40.720811 1816435 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 11:01:40.721206 1816435 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 11:01:40.721276 1816435 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-268828"
	I1123 11:01:40.721289 1816435 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-268828"
	W1123 11:01:40.721295 1816435 addons.go:248] addon storage-provisioner should already be in state true
	I1123 11:01:40.721315 1816435 host.go:66] Checking if "newest-cni-268828" exists ...
	I1123 11:01:40.721767 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:40.722233 1816435 config.go:182] Loaded profile config "newest-cni-268828": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 11:01:40.722366 1816435 addons.go:70] Setting metrics-server=true in profile "newest-cni-268828"
	I1123 11:01:40.722398 1816435 addons.go:239] Setting addon metrics-server=true in "newest-cni-268828"
	W1123 11:01:40.722435 1816435 addons.go:248] addon metrics-server should already be in state true
	I1123 11:01:40.722470 1816435 host.go:66] Checking if "newest-cni-268828" exists ...
	I1123 11:01:40.722952 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:40.725612 1816435 addons.go:70] Setting dashboard=true in profile "newest-cni-268828"
	I1123 11:01:40.725643 1816435 addons.go:239] Setting addon dashboard=true in "newest-cni-268828"
	W1123 11:01:40.725650 1816435 addons.go:248] addon dashboard should already be in state true
	I1123 11:01:40.725673 1816435 host.go:66] Checking if "newest-cni-268828" exists ...
	I1123 11:01:40.726164 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:40.727994 1816435 out.go:179] * Verifying Kubernetes components...
	I1123 11:01:40.728133 1816435 addons.go:70] Setting default-storageclass=true in profile "newest-cni-268828"
	I1123 11:01:40.728146 1816435 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-268828"
	I1123 11:01:40.728402 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:40.733297 1816435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 11:01:40.790099 1816435 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 11:01:40.792939 1816435 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:01:40.792959 1816435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 11:01:40.793025 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:40.802452 1816435 addons.go:239] Setting addon default-storageclass=true in "newest-cni-268828"
	W1123 11:01:40.802472 1816435 addons.go:248] addon default-storageclass should already be in state true
	I1123 11:01:40.802498 1816435 host.go:66] Checking if "newest-cni-268828" exists ...
	I1123 11:01:40.802902 1816435 cli_runner.go:164] Run: docker container inspect newest-cni-268828 --format={{.State.Status}}
	I1123 11:01:40.813354 1816435 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 11:01:40.816631 1816435 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 11:01:40.825634 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 11:01:40.825667 1816435 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 11:01:40.825734 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:40.832978 1816435 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1123 11:01:40.836634 1816435 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 11:01:40.836661 1816435 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 11:01:40.836742 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:40.871009 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:40.883826 1816435 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 11:01:40.883848 1816435 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 11:01:40.883910 1816435 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-268828
	I1123 11:01:40.888245 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:40.916380 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:40.921363 1816435 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35294 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/newest-cni-268828/id_rsa Username:docker}
	I1123 11:01:41.100867 1816435 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 11:01:41.309093 1816435 api_server.go:52] waiting for apiserver process to appear ...
	I1123 11:01:41.309167 1816435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 11:01:41.434662 1816435 api_server.go:72] duration metric: took 713.775645ms to wait for apiserver process to appear ...
	I1123 11:01:41.434689 1816435 api_server.go:88] waiting for apiserver healthz status ...
	I1123 11:01:41.434708 1816435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:01:41.441519 1816435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 11:01:41.451465 1816435 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 11:01:41.451488 1816435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1123 11:01:41.497306 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 11:01:41.497330 1816435 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 11:01:41.537459 1816435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 11:01:41.582202 1816435 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 11:01:41.582226 1816435 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 11:01:41.638358 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 11:01:41.638384 1816435 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 11:01:41.734831 1816435 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 11:01:41.734856 1816435 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 11:01:41.765418 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 11:01:41.765444 1816435 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 11:01:41.780541 1816435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 11:01:41.902691 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 11:01:41.902720 1816435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 11:01:42.036738 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 11:01:42.036765 1816435 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 11:01:42.133168 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 11:01:42.133199 1816435 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 11:01:42.279558 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 11:01:42.279587 1816435 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 11:01:42.346197 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 11:01:42.346223 1816435 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 11:01:42.381649 1816435 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 11:01:42.381682 1816435 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 11:01:42.408745 1816435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
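
Each addon above is installed by copying its manifests into /etc/kubernetes/addons and then running the node's kubectl binary against the in-VM kubeconfig with one `apply` and multiple `-f` flags. The following is a minimal sketch of that apply step, assuming hypothetical paths; it is not the ssh_runner code itself.

// applyaddons_sketch.go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyManifests runs `kubectl apply -f m1 -f m2 ...` with KUBECONFIG
// pointed at the given file, mirroring the commands in the log.
func applyManifests(kubectl, kubeconfig string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyManifests(
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"/var/lib/minikube/kubeconfig",
		[]string{
			"/etc/kubernetes/addons/storage-provisioner.yaml",
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
		},
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
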
	I1123 11:01:45.594696 1816435 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 11:01:45.594728 1816435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 11:01:45.594740 1816435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:01:45.952169 1816435 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:01:45.952196 1816435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:01:45.952216 1816435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:01:45.968092 1816435 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:01:45.968125 1816435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:01:46.319564 1816435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.877997341s)
	I1123 11:01:46.435213 1816435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:01:46.443932 1816435 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:01:46.443959 1816435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:01:46.937080 1816435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:01:46.969146 1816435 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 11:01:46.969173 1816435 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 11:01:47.435754 1816435 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 11:01:47.444237 1816435 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 11:01:47.445429 1816435 api_server.go:141] control plane version: v1.34.1
	I1123 11:01:47.445450 1816435 api_server.go:131] duration metric: took 6.010754454s to wait for apiserver health ...
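
The repeated 403 and 500 responses above are the expected transient states while the apiserver's post-start hooks (RBAC bootstrap roles, priority classes, bootstrap controller) finish; the wait loop simply polls /healthz until it returns 200. A minimal sketch of such a poll loop follows; the URL, timeout, and the decision to skip TLS verification are illustrative only.

// healthzwait_sketch.go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it
// returns 200 OK or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("apiserver healthy")
}
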
	I1123 11:01:47.445458 1816435 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 11:01:47.459008 1816435 system_pods.go:59] 9 kube-system pods found
	I1123 11:01:47.459041 1816435 system_pods.go:61] "coredns-66bc5c9577-zj8c2" [60d2ed77-2c17-4a11-8559-c7aa4d427063] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 11:01:47.459050 1816435 system_pods.go:61] "etcd-newest-cni-268828" [9021a8a5-8d7e-4d6c-8f65-d74641ed8dd6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 11:01:47.459055 1816435 system_pods.go:61] "kindnet-5n7pn" [77dd699f-852f-469d-8bfa-f202e1d8c952] Running
	I1123 11:01:47.459061 1816435 system_pods.go:61] "kube-apiserver-newest-cni-268828" [267a219e-d6c2-4c7e-a727-49f6f7d6d0db] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 11:01:47.459067 1816435 system_pods.go:61] "kube-controller-manager-newest-cni-268828" [6105b256-7e84-4a61-ad5b-ca2a6a25ede3] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 11:01:47.459071 1816435 system_pods.go:61] "kube-proxy-g9xhq" [43073d90-a838-4a88-bc1f-9b0712c73f45] Running
	I1123 11:01:47.459078 1816435 system_pods.go:61] "kube-scheduler-newest-cni-268828" [0d858893-86e4-4962-b1c3-3e058efb9fba] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 11:01:47.459082 1816435 system_pods.go:61] "metrics-server-746fcd58dc-kl77h" [f4d7ae89-d115-4e6f-805a-cb65b26853cb] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 11:01:47.459089 1816435 system_pods.go:61] "storage-provisioner" [ce712584-50ae-4b26-9aa3-e533babb6f4d] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 11:01:47.459095 1816435 system_pods.go:74] duration metric: took 13.631227ms to wait for pod list to return data ...
	I1123 11:01:47.459103 1816435 default_sa.go:34] waiting for default service account to be created ...
	I1123 11:01:47.462639 1816435 default_sa.go:45] found service account: "default"
	I1123 11:01:47.462662 1816435 default_sa.go:55] duration metric: took 3.551713ms for default service account to be created ...
	I1123 11:01:47.462675 1816435 kubeadm.go:587] duration metric: took 6.741793209s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 11:01:47.462691 1816435 node_conditions.go:102] verifying NodePressure condition ...
	I1123 11:01:47.467736 1816435 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 11:01:47.467819 1816435 node_conditions.go:123] node cpu capacity is 2
	I1123 11:01:47.467846 1816435 node_conditions.go:105] duration metric: took 5.149186ms to run NodePressure ...
	I1123 11:01:47.467870 1816435 start.go:242] waiting for startup goroutines ...
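
The system_pods wait above lists the kube-system pods and records each pod's phase and readiness conditions. A minimal client-go sketch of that listing follows; the kubeconfig path is a placeholder and this is not minikube's system_pods.go.

// syspods_sketch.go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// List kube-system pods and print their current phase, similar to
	// the "9 kube-system pods found" summary in the log.
	pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%-45s %s\n", p.Name, p.Status.Phase)
	}
}
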
	I1123 11:01:49.545630 1816435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.765050935s)
	I1123 11:01:49.545664 1816435 addons.go:495] Verifying addon metrics-server=true in "newest-cni-268828"
	I1123 11:01:49.545750 1816435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (7.136972724s)
	I1123 11:01:49.545883 1816435 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.008398994s)
	I1123 11:01:49.551230 1816435 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-268828 addons enable metrics-server
	
	I1123 11:01:49.555349 1816435 out.go:179] * Enabled addons: default-storageclass, metrics-server, storage-provisioner, dashboard
	I1123 11:01:49.558308 1816435 addons.go:530] duration metric: took 8.837099937s for enable addons: enabled=[default-storageclass metrics-server storage-provisioner dashboard]
	I1123 11:01:49.558351 1816435 start.go:247] waiting for cluster config update ...
	I1123 11:01:49.558364 1816435 start.go:256] writing updated cluster config ...
	I1123 11:01:49.558650 1816435 ssh_runner.go:195] Run: rm -f paused
	I1123 11:01:49.757857 1816435 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 11:01:49.762979 1816435 out.go:179] * Done! kubectl is now configured to use "newest-cni-268828" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	e189ce28e5135       1611cd07b61d5       11 seconds ago       Running             busybox                   0                   dffe386cbb7a1       busybox                                                default
	f22578041e154       ba04bb24b9575       16 seconds ago       Running             storage-provisioner       0                   a71725ffc52ad       storage-provisioner                                    kube-system
	3fd410f0e2a3e       138784d87c9c5       16 seconds ago       Running             coredns                   0                   d4bd47c2d2c6b       coredns-66bc5c9577-k6bmz                               kube-system
	87c2d96a65070       05baa95f5142d       57 seconds ago       Running             kube-proxy                0                   1bdacc4f63830       kube-proxy-5zfbc                                       kube-system
	358bcf6734bf7       b1a8c6f707935       57 seconds ago       Running             kindnet-cni               0                   fa9979b2ba5b0       kindnet-2wbs5                                          kube-system
	03dabe102ae94       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   0044637999614       kube-scheduler-default-k8s-diff-port-071466            kube-system
	623f8b82e9609       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   a463567617873       kube-controller-manager-default-k8s-diff-port-071466   kube-system
	6ecda63483579       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   a06ba656c7698       kube-apiserver-default-k8s-diff-port-071466            kube-system
	ac530913b339c       a1894772a478e       About a minute ago   Running             etcd                      0                   040a8f7203eb4       etcd-default-k8s-diff-port-071466                      kube-system
	
	
	==> containerd <==
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.747764190Z" level=info msg="Container f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.760548431Z" level=info msg="CreateContainer within sandbox \"d4bd47c2d2c6b401a91eb57dfa82239faf692590c589122dab43c2cc4193f0e6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3fd410f0e2a3e0c4840400042a8552e32e4eed36e78ef22f4b9b5d8220bd9592\""
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.762406091Z" level=info msg="StartContainer for \"3fd410f0e2a3e0c4840400042a8552e32e4eed36e78ef22f4b9b5d8220bd9592\""
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.764112230Z" level=info msg="connecting to shim 3fd410f0e2a3e0c4840400042a8552e32e4eed36e78ef22f4b9b5d8220bd9592" address="unix:///run/containerd/s/2f593413481c888c5a6eba1074941207bbe21b35f4e63729a4369031a10621b8" protocol=ttrpc version=3
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.767292079Z" level=info msg="CreateContainer within sandbox \"a71725ffc52ad275ea9bcbacbd9a99dbc7ab373a3bacda76b171d276a43a0860\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60\""
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.769770187Z" level=info msg="StartContainer for \"f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60\""
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.772755817Z" level=info msg="connecting to shim f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60" address="unix:///run/containerd/s/62d637d39bcef7694c61ae197324ac6e59ccd363192af10e3306d353c7e18dc0" protocol=ttrpc version=3
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.851390005Z" level=info msg="StartContainer for \"3fd410f0e2a3e0c4840400042a8552e32e4eed36e78ef22f4b9b5d8220bd9592\" returns successfully"
	Nov 23 11:01:34 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:34.866281188Z" level=info msg="StartContainer for \"f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60\" returns successfully"
	Nov 23 11:01:37 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:37.191609721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f8b4a07b-84e0-4042-b688-0f75fde332b2,Namespace:default,Attempt:0,}"
	Nov 23 11:01:37 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:37.241347050Z" level=info msg="connecting to shim dffe386cbb7a181ef612be5f63319675f61a1fe631887b0f8aa0c7bad2626711" address="unix:///run/containerd/s/5c814b1bb8e757eb145ce183ae8d9e8cf8715370d46b3ba7f260cb722974d1bd" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 11:01:37 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:37.328809385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f8b4a07b-84e0-4042-b688-0f75fde332b2,Namespace:default,Attempt:0,} returns sandbox id \"dffe386cbb7a181ef612be5f63319675f61a1fe631887b0f8aa0c7bad2626711\""
	Nov 23 11:01:37 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:37.332190050Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.599477995Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.601617091Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937188"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.603987484Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.620106861Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.621026616Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.28879464s"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.621076182Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.634997658Z" level=info msg="CreateContainer within sandbox \"dffe386cbb7a181ef612be5f63319675f61a1fe631887b0f8aa0c7bad2626711\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.651742496Z" level=info msg="Container e189ce28e51356ce42963d1befa34acb8a4741d65b76fa98daeb63beb472e34c: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.664137184Z" level=info msg="CreateContainer within sandbox \"dffe386cbb7a181ef612be5f63319675f61a1fe631887b0f8aa0c7bad2626711\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"e189ce28e51356ce42963d1befa34acb8a4741d65b76fa98daeb63beb472e34c\""
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.668133817Z" level=info msg="StartContainer for \"e189ce28e51356ce42963d1befa34acb8a4741d65b76fa98daeb63beb472e34c\""
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.668961783Z" level=info msg="connecting to shim e189ce28e51356ce42963d1befa34acb8a4741d65b76fa98daeb63beb472e34c" address="unix:///run/containerd/s/5c814b1bb8e757eb145ce183ae8d9e8cf8715370d46b3ba7f260cb722974d1bd" protocol=ttrpc version=3
	Nov 23 11:01:39 default-k8s-diff-port-071466 containerd[759]: time="2025-11-23T11:01:39.759888678Z" level=info msg="StartContainer for \"e189ce28e51356ce42963d1befa34acb8a4741d65b76fa98daeb63beb472e34c\" returns successfully"
	
	
	==> coredns [3fd410f0e2a3e0c4840400042a8552e32e4eed36e78ef22f4b9b5d8220bd9592] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55268 - 48427 "HINFO IN 4207838737370628221.1857008942795508872. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.036125072s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-071466
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-071466
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=37270640e5bc1cd4189f05b508feb80c8debef53
	                    minikube.k8s.io/name=default-k8s-diff-port-071466
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T11_00_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 11:00:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-071466
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 11:01:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 11:01:48 +0000   Sun, 23 Nov 2025 11:00:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 11:01:48 +0000   Sun, 23 Nov 2025 11:00:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 11:01:48 +0000   Sun, 23 Nov 2025 11:00:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 11:01:48 +0000   Sun, 23 Nov 2025 11:01:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-071466
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                b03f0beb-f5c3-48a0-8808-a8238f689abb
	  Boot ID:                    09ea91a5-6718-4065-8697-347594dcad09
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-66bc5c9577-k6bmz                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59s
	  kube-system                 etcd-default-k8s-diff-port-071466                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-2wbs5                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-default-k8s-diff-port-071466             250m (12%)    0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-071466    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-5zfbc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-default-k8s-diff-port-071466             100m (5%)     0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 56s                kube-proxy       
	  Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasSufficientPID
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node default-k8s-diff-port-071466 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                node-controller  Node default-k8s-diff-port-071466 event: Registered Node default-k8s-diff-port-071466 in Controller
	  Normal   NodeReady                17s                kubelet          Node default-k8s-diff-port-071466 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 10:09] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 10:50] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [ac530913b339c54f8e7f77be8b5c7acf50be231563d7a47e777d7d1ed95bc1cb] <==
	{"level":"warn","ts":"2025-11-23T11:00:41.812784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.826096Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.846549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.871687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.885190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.902238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.925095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44622","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.938668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.957916Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.981801Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:41.998768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.018102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.050181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.070586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.096578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.128537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.148055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.165569Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.189257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.231533Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.253269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.266627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.290131Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.315617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T11:00:42.433515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44902","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 11:01:51 up 11:44,  0 user,  load average: 5.83, 4.00, 3.21
	Linux default-k8s-diff-port-071466 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [358bcf6734bf791c2383468f755f233c9a10cd0f2f4cc61a94acc69e292996b3] <==
	I1123 11:00:53.927636       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 11:00:53.927940       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 11:00:53.928084       1 main.go:148] setting mtu 1500 for CNI 
	I1123 11:00:53.928153       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 11:00:53.928165       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T11:00:54Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 11:00:54.130548       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 11:00:54.130574       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 11:00:54.130583       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 11:00:54.130881       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 11:01:24.108952       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 11:01:24.131463       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 11:01:24.131567       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 11:01:24.131645       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 11:01:25.430977       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 11:01:25.431021       1 metrics.go:72] Registering metrics
	I1123 11:01:25.431095       1 controller.go:711] "Syncing nftables rules"
	I1123 11:01:34.115272       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:01:34.115346       1 main.go:301] handling current node
	I1123 11:01:44.110792       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 11:01:44.110832       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6ecda63483579ca7cd61353a106592764baea71552fb8844448159cb7e3ed5ee] <==
	I1123 11:00:43.672821       1 cache.go:39] Caches are synced for autoregister controller
	I1123 11:00:43.681376       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 11:00:43.697400       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:00:43.697686       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 11:00:43.718239       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:00:43.745703       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 11:00:43.886768       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 11:00:44.170518       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 11:00:44.179946       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 11:00:44.181995       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 11:00:45.375004       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 11:00:45.456768       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 11:00:45.577795       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 11:00:45.596580       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 11:00:45.598178       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 11:00:45.605127       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 11:00:46.552254       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 11:00:47.081369       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 11:00:47.103356       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 11:00:47.130115       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 11:00:52.183903       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 11:00:52.490708       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:00:52.504597       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 11:00:52.545777       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 11:01:46.144033       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:48238: use of closed network connection
	
	
	==> kube-controller-manager [623f8b82e960994c44de86650859d7bdc1e47066943c6de9026db352a65cd857] <==
	I1123 11:00:51.616360       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 11:00:51.616387       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 11:00:51.616392       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 11:00:51.616397       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 11:00:51.619305       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 11:00:51.629626       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 11:00:51.636468       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 11:00:51.636798       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 11:00:51.637713       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 11:00:51.637877       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 11:00:51.637933       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 11:00:51.637959       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 11:00:51.637975       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 11:00:51.637992       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 11:00:51.638117       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 11:00:51.644483       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 11:00:51.644809       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 11:00:51.644873       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 11:00:51.644977       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 11:00:51.664796       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-071466" podCIDRs=["10.244.0.0/24"]
	I1123 11:00:51.695809       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:00:51.696042       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 11:00:51.696127       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 11:00:51.759405       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 11:01:36.596194       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [87c2d96a650706cc13ad43c58de9f2a9074d0c00d27858ed68f7da7671714699] <==
	I1123 11:00:54.073784       1 server_linux.go:53] "Using iptables proxy"
	I1123 11:00:54.165725       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 11:00:54.266661       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 11:00:54.266702       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 11:00:54.266779       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 11:00:54.491376       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 11:00:54.491445       1 server_linux.go:132] "Using iptables Proxier"
	I1123 11:00:54.691719       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 11:00:54.692019       1 server.go:527] "Version info" version="v1.34.1"
	I1123 11:00:54.692035       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 11:00:54.737301       1 config.go:200] "Starting service config controller"
	I1123 11:00:54.737324       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 11:00:54.737365       1 config.go:106] "Starting endpoint slice config controller"
	I1123 11:00:54.737370       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 11:00:54.737385       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 11:00:54.737396       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 11:00:54.738122       1 config.go:309] "Starting node config controller"
	I1123 11:00:54.738142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 11:00:54.738148       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 11:00:54.844468       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 11:00:54.844504       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 11:00:54.844515       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [03dabe102ae9498275cd045ad7cb86b8bc606c065986b7682a76a3ce379b7780] <==
	E1123 11:00:43.959801       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:00:43.979759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 11:00:43.979981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 11:00:43.980425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 11:00:43.980538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 11:00:43.980889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 11:00:43.980976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 11:00:43.996076       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 11:00:43.996168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 11:00:43.996205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 11:00:43.996242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 11:00:43.996332       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 11:00:43.996382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 11:00:43.996428       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 11:00:43.996497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 11:00:43.996532       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:00:43.996581       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 11:00:43.998702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 11:00:44.005245       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 11:00:44.817708       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 11:00:44.833240       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 11:00:44.859442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 11:00:44.878583       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1123 11:00:44.889189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1123 11:00:47.351476       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 11:00:48 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:48.430586    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fb1645a744afee3f34fa1455e112e50b-flexvolume-dir\") pod \"kube-controller-manager-default-k8s-diff-port-071466\" (UID: \"fb1645a744afee3f34fa1455e112e50b\") " pod="kube-system/kube-controller-manager-default-k8s-diff-port-071466"
	Nov 23 11:00:48 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:48.430673    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fb1645a744afee3f34fa1455e112e50b-kubeconfig\") pod \"kube-controller-manager-default-k8s-diff-port-071466\" (UID: \"fb1645a744afee3f34fa1455e112e50b\") " pod="kube-system/kube-controller-manager-default-k8s-diff-port-071466"
	Nov 23 11:00:48 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:48.430771    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f340aa4fdadfe900f6b5d328741c3a78-kubeconfig\") pod \"kube-scheduler-default-k8s-diff-port-071466\" (UID: \"f340aa4fdadfe900f6b5d328741c3a78\") " pod="kube-system/kube-scheduler-default-k8s-diff-port-071466"
	Nov 23 11:00:48 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:48.636873    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-071466" podStartSLOduration=2.636853774 podStartE2EDuration="2.636853774s" podCreationTimestamp="2025-11-23 11:00:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:00:48.42474254 +0000 UTC m=+1.401754468" watchObservedRunningTime="2025-11-23 11:00:48.636853774 +0000 UTC m=+1.613865710"
	Nov 23 11:00:51 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:51.714670    1491 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 11:00:51 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:51.715825    1491 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.684368    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a1f31cd-7028-4474-ae74-b50d5307009a-lib-modules\") pod \"kindnet-2wbs5\" (UID: \"5a1f31cd-7028-4474-ae74-b50d5307009a\") " pod="kube-system/kindnet-2wbs5"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.684405    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5a1f31cd-7028-4474-ae74-b50d5307009a-cni-cfg\") pod \"kindnet-2wbs5\" (UID: \"5a1f31cd-7028-4474-ae74-b50d5307009a\") " pod="kube-system/kindnet-2wbs5"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.684450    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x68rl\" (UniqueName: \"kubernetes.io/projected/5a1f31cd-7028-4474-ae74-b50d5307009a-kube-api-access-x68rl\") pod \"kindnet-2wbs5\" (UID: \"5a1f31cd-7028-4474-ae74-b50d5307009a\") " pod="kube-system/kindnet-2wbs5"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.684472    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a1f31cd-7028-4474-ae74-b50d5307009a-xtables-lock\") pod \"kindnet-2wbs5\" (UID: \"5a1f31cd-7028-4474-ae74-b50d5307009a\") " pod="kube-system/kindnet-2wbs5"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.824837    1491 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.887465    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce0d571b-d10b-446a-8824-44e1566eb31f-xtables-lock\") pod \"kube-proxy-5zfbc\" (UID: \"ce0d571b-d10b-446a-8824-44e1566eb31f\") " pod="kube-system/kube-proxy-5zfbc"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.887505    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce0d571b-d10b-446a-8824-44e1566eb31f-lib-modules\") pod \"kube-proxy-5zfbc\" (UID: \"ce0d571b-d10b-446a-8824-44e1566eb31f\") " pod="kube-system/kube-proxy-5zfbc"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.887530    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klqm2\" (UniqueName: \"kubernetes.io/projected/ce0d571b-d10b-446a-8824-44e1566eb31f-kube-api-access-klqm2\") pod \"kube-proxy-5zfbc\" (UID: \"ce0d571b-d10b-446a-8824-44e1566eb31f\") " pod="kube-system/kube-proxy-5zfbc"
	Nov 23 11:00:52 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:52.887558    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ce0d571b-d10b-446a-8824-44e1566eb31f-kube-proxy\") pod \"kube-proxy-5zfbc\" (UID: \"ce0d571b-d10b-446a-8824-44e1566eb31f\") " pod="kube-system/kube-proxy-5zfbc"
	Nov 23 11:00:54 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:54.814032    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5zfbc" podStartSLOduration=2.814011322 podStartE2EDuration="2.814011322s" podCreationTimestamp="2025-11-23 11:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:00:54.812142126 +0000 UTC m=+7.789154078" watchObservedRunningTime="2025-11-23 11:00:54.814011322 +0000 UTC m=+7.791023258"
	Nov 23 11:00:56 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:00:56.022776    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-2wbs5" podStartSLOduration=4.02272623 podStartE2EDuration="4.02272623s" podCreationTimestamp="2025-11-23 11:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:00:54.84430742 +0000 UTC m=+7.821319381" watchObservedRunningTime="2025-11-23 11:00:56.02272623 +0000 UTC m=+8.999738232"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.195108    1491 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.345752    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdl2g\" (UniqueName: \"kubernetes.io/projected/44dabc1e-0b98-4250-861a-5992ede34070-kube-api-access-wdl2g\") pod \"coredns-66bc5c9577-k6bmz\" (UID: \"44dabc1e-0b98-4250-861a-5992ede34070\") " pod="kube-system/coredns-66bc5c9577-k6bmz"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.345811    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/44dabc1e-0b98-4250-861a-5992ede34070-config-volume\") pod \"coredns-66bc5c9577-k6bmz\" (UID: \"44dabc1e-0b98-4250-861a-5992ede34070\") " pod="kube-system/coredns-66bc5c9577-k6bmz"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.345836    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/152263f3-c362-4790-a756-3d028b31e04a-tmp\") pod \"storage-provisioner\" (UID: \"152263f3-c362-4790-a756-3d028b31e04a\") " pod="kube-system/storage-provisioner"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.345853    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swkrj\" (UniqueName: \"kubernetes.io/projected/152263f3-c362-4790-a756-3d028b31e04a-kube-api-access-swkrj\") pod \"storage-provisioner\" (UID: \"152263f3-c362-4790-a756-3d028b31e04a\") " pod="kube-system/storage-provisioner"
	Nov 23 11:01:34 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:34.925794    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k6bmz" podStartSLOduration=42.925774749 podStartE2EDuration="42.925774749s" podCreationTimestamp="2025-11-23 11:00:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:01:34.906133977 +0000 UTC m=+47.883145921" watchObservedRunningTime="2025-11-23 11:01:34.925774749 +0000 UTC m=+47.902786685"
	Nov 23 11:01:36 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:36.878917    1491 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.878886156 podStartE2EDuration="42.878886156s" podCreationTimestamp="2025-11-23 11:00:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 11:01:34.949855205 +0000 UTC m=+47.926867132" watchObservedRunningTime="2025-11-23 11:01:36.878886156 +0000 UTC m=+49.855898092"
	Nov 23 11:01:36 default-k8s-diff-port-071466 kubelet[1491]: I1123 11:01:36.964437    1491 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8drbc\" (UniqueName: \"kubernetes.io/projected/f8b4a07b-84e0-4042-b688-0f75fde332b2-kube-api-access-8drbc\") pod \"busybox\" (UID: \"f8b4a07b-84e0-4042-b688-0f75fde332b2\") " pod="default/busybox"
	
	
	==> storage-provisioner [f22578041e1547be146cd195306a9ab8edbef6b622588da99bb907f65508bb60] <==
	I1123 11:01:34.958309       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 11:01:34.960894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:34.967919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:01:34.968194       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 11:01:34.968508       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071466_d1f06908-43c5-4138-b382-3c1f6f829bd0!
	I1123 11:01:34.973642       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f6687691-e179-4c4e-b02b-d913321bfbad", APIVersion:"v1", ResourceVersion:"472", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-071466_d1f06908-43c5-4138-b382-3c1f6f829bd0 became leader
	W1123 11:01:34.977206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:34.984738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 11:01:35.069512       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-071466_d1f06908-43c5-4138-b382-3c1f6f829bd0!
	W1123 11:01:36.988676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:36.995994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:38.999548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:39.006171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:41.012425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:41.017883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:43.021608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:43.032002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:45.057309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:45.072633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:47.076605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:47.086221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:49.089801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:49.099402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:51.103078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 11:01:51.111822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-071466 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.71s)
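Note on the storage-provisioner log above: the repeated "v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice" warnings appear to be triggered by the leader-election lock, which still uses a core/v1 Endpoints object (the LeaderElection event above references Kind:"Endpoints"). The sketch below is a minimal, illustrative client-go program that reads the recommended discovery.k8s.io/v1 EndpointSlice API instead; it is not part of the test suite, and the in-cluster config and kube-system namespace are assumptions made only for the example.

	// Hedged sketch, not part of minikube: list discovery.k8s.io/v1 EndpointSlices,
	// the API the deprecation warnings above point to. Assumes client-go and an
	// in-cluster kubeconfig; the kube-system namespace is illustrative.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// EndpointSlice replaces the deprecated core/v1 Endpoints resource.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s (%d endpoints)\n", s.Name, len(s.Endpoints))
		}
	}

Switching the election lock to a coordination.k8s.io/v1 Lease (the usual modern choice) would avoid this class of warning altogether.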

                                                
                                    

Test pass (299/333)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.39
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.1/json-events 4.86
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.41
18 TestDownloadOnly/v1.34.1/DeleteAll 0.37
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.22
21 TestBinaryMirror 0.62
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 173.66
29 TestAddons/serial/Volcano 39.66
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 8.81
35 TestAddons/parallel/Registry 16.39
36 TestAddons/parallel/RegistryCreds 0.87
37 TestAddons/parallel/Ingress 18.82
38 TestAddons/parallel/InspektorGadget 11.78
39 TestAddons/parallel/MetricsServer 5.78
41 TestAddons/parallel/CSI 51.31
42 TestAddons/parallel/Headlamp 11.35
43 TestAddons/parallel/CloudSpanner 5.63
44 TestAddons/parallel/LocalPath 53.31
45 TestAddons/parallel/NvidiaDevicePlugin 6.58
46 TestAddons/parallel/Yakd 11.86
48 TestAddons/StoppedEnableDisable 12.34
49 TestCertOptions 42.26
50 TestCertExpiration 233.38
52 TestForceSystemdFlag 37.3
53 TestForceSystemdEnv 43.25
54 TestDockerEnvContainerd 48.57
58 TestErrorSpam/setup 34.64
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.14
61 TestErrorSpam/pause 1.65
62 TestErrorSpam/unpause 1.8
63 TestErrorSpam/stop 1.71
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 78.78
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.89
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.59
75 TestFunctional/serial/CacheCmd/cache/add_local 1.23
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.85
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 43.32
84 TestFunctional/serial/ComponentHealth 0.1
85 TestFunctional/serial/LogsCmd 1.48
86 TestFunctional/serial/LogsFileCmd 1.46
87 TestFunctional/serial/InvalidService 4.78
89 TestFunctional/parallel/ConfigCmd 0.5
90 TestFunctional/parallel/DashboardCmd 8.98
91 TestFunctional/parallel/DryRun 0.68
92 TestFunctional/parallel/InternationalLanguage 0.22
93 TestFunctional/parallel/StatusCmd 1.08
97 TestFunctional/parallel/ServiceCmdConnect 8.65
98 TestFunctional/parallel/AddonsCmd 0.26
99 TestFunctional/parallel/PersistentVolumeClaim 24.64
101 TestFunctional/parallel/SSHCmd 0.56
102 TestFunctional/parallel/CpCmd 1.94
104 TestFunctional/parallel/FileSync 0.47
105 TestFunctional/parallel/CertSync 2.05
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.66
113 TestFunctional/parallel/License 0.37
114 TestFunctional/parallel/Version/short 0.07
115 TestFunctional/parallel/Version/components 1.35
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.27
117 TestFunctional/parallel/ImageCommands/ImageListTable 0.29
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
120 TestFunctional/parallel/ImageCommands/ImageBuild 4.21
121 TestFunctional/parallel/ImageCommands/Setup 0.65
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.22
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.18
125 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.38
126 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.61
127 TestFunctional/parallel/ServiceCmd/DeployApp 7.48
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.41
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
130 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
131 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.64
132 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
134 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
135 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
137 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.33
138 TestFunctional/parallel/ServiceCmd/List 0.35
139 TestFunctional/parallel/ServiceCmd/JSONOutput 0.33
140 TestFunctional/parallel/ServiceCmd/HTTPS 0.4
141 TestFunctional/parallel/ServiceCmd/Format 0.38
142 TestFunctional/parallel/ServiceCmd/URL 0.4
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
144 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
150 TestFunctional/parallel/ProfileCmd/profile_list 0.45
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.44
152 TestFunctional/parallel/MountCmd/any-port 8.03
153 TestFunctional/parallel/MountCmd/specific-port 2.21
154 TestFunctional/parallel/MountCmd/VerifyCleanup 2.72
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 200.77
163 TestMultiControlPlane/serial/DeployApp 8.16
164 TestMultiControlPlane/serial/PingHostFromPods 1.57
165 TestMultiControlPlane/serial/AddWorkerNode 30.65
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.05
168 TestMultiControlPlane/serial/CopyFile 20.41
169 TestMultiControlPlane/serial/StopSecondaryNode 12.87
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.6
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.41
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.61
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.57
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
176 TestMultiControlPlane/serial/StopCluster 36.19
177 TestMultiControlPlane/serial/RestartCluster 59.14
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.81
179 TestMultiControlPlane/serial/AddSecondaryNode 84.71
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.08
185 TestJSONOutput/start/Command 81.41
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.7
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.6
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 5.99
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 38.91
211 TestKicCustomNetwork/use_default_bridge_network 35.77
212 TestKicExistingNetwork 35.89
213 TestKicCustomSubnet 33.82
214 TestKicStaticIP 34.38
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 71.08
219 TestMountStart/serial/StartWithMountFirst 5.78
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 8.19
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.29
226 TestMountStart/serial/RestartStopped 7.44
227 TestMountStart/serial/VerifyMountPostStop 0.26
230 TestMultiNode/serial/FreshStart2Nodes 81.65
231 TestMultiNode/serial/DeployApp2Nodes 6.44
232 TestMultiNode/serial/PingHostFrom2Pods 0.94
233 TestMultiNode/serial/AddNode 27.81
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.5
237 TestMultiNode/serial/StopNode 2.43
238 TestMultiNode/serial/StartAfterStop 7.64
239 TestMultiNode/serial/RestartKeepsNodes 72.74
240 TestMultiNode/serial/DeleteNode 5.67
241 TestMultiNode/serial/StopMultiNode 24.12
242 TestMultiNode/serial/RestartMultiNode 48.25
243 TestMultiNode/serial/ValidateNameConflict 39.49
248 TestPreload 122.13
250 TestScheduledStopUnix 107.94
253 TestInsufficientStorage 13.17
254 TestRunningBinaryUpgrade 62.93
256 TestKubernetesUpgrade 348.14
257 TestMissingContainerUpgrade 160.61
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 46.82
261 TestNoKubernetes/serial/StartWithStopK8s 24.75
262 TestNoKubernetes/serial/Start 7.71
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.37
265 TestNoKubernetes/serial/ProfileList 0.86
266 TestNoKubernetes/serial/Stop 2.72
267 TestNoKubernetes/serial/StartNoArgs 7.41
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.35
269 TestStoppedBinaryUpgrade/Setup 0.79
270 TestStoppedBinaryUpgrade/Upgrade 53.95
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.39
280 TestPause/serial/Start 50.69
281 TestPause/serial/SecondStartNoReconfiguration 7.46
282 TestPause/serial/Pause 0.69
283 TestPause/serial/VerifyStatus 0.32
284 TestPause/serial/Unpause 0.62
285 TestPause/serial/PauseAgain 0.83
286 TestPause/serial/DeletePaused 2.76
287 TestPause/serial/VerifyDeletedResources 15.39
295 TestNetworkPlugins/group/false 4.62
300 TestStartStop/group/old-k8s-version/serial/FirstStart 58.83
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.14
303 TestStartStop/group/old-k8s-version/serial/Stop 12.15
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
305 TestStartStop/group/old-k8s-version/serial/SecondStart 55.48
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
309 TestStartStop/group/old-k8s-version/serial/Pause 3.19
311 TestStartStop/group/no-preload/serial/FirstStart 71.26
313 TestStartStop/group/embed-certs/serial/FirstStart 86.87
315 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
316 TestStartStop/group/no-preload/serial/Stop 12.13
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
318 TestStartStop/group/no-preload/serial/SecondStart 51.43
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.58
321 TestStartStop/group/embed-certs/serial/Stop 12.39
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.19
323 TestStartStop/group/embed-certs/serial/SecondStart 53.84
324 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.02
325 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.14
326 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
327 TestStartStop/group/no-preload/serial/Pause 3.08
329 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 83.4
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.17
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.32
333 TestStartStop/group/embed-certs/serial/Pause 4.12
335 TestStartStop/group/newest-cni/serial/FirstStart 39.63
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.95
338 TestStartStop/group/newest-cni/serial/Stop 1.34
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
340 TestStartStop/group/newest-cni/serial/SecondStart 17.12
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.29
345 TestStartStop/group/newest-cni/serial/Pause 3.88
346 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.91
347 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.49
348 TestNetworkPlugins/group/auto/Start 89.17
349 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
350 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.01
351 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
352 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
353 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
354 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.16
355 TestNetworkPlugins/group/kindnet/Start 85.55
356 TestNetworkPlugins/group/auto/KubeletFlags 0.55
357 TestNetworkPlugins/group/auto/NetCatPod 8.41
358 TestNetworkPlugins/group/auto/DNS 0.24
359 TestNetworkPlugins/group/auto/Localhost 0.19
360 TestNetworkPlugins/group/auto/HairPin 0.18
361 TestNetworkPlugins/group/calico/Start 60.53
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
364 TestNetworkPlugins/group/kindnet/NetCatPod 9.35
365 TestNetworkPlugins/group/kindnet/DNS 0.23
366 TestNetworkPlugins/group/kindnet/Localhost 0.15
367 TestNetworkPlugins/group/kindnet/HairPin 0.17
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/calico/KubeletFlags 0.44
370 TestNetworkPlugins/group/calico/NetCatPod 9.44
371 TestNetworkPlugins/group/calico/DNS 0.37
372 TestNetworkPlugins/group/calico/Localhost 0.22
373 TestNetworkPlugins/group/calico/HairPin 0.2
374 TestNetworkPlugins/group/custom-flannel/Start 65.86
375 TestNetworkPlugins/group/enable-default-cni/Start 72.53
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.34
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.31
378 TestNetworkPlugins/group/custom-flannel/DNS 0.16
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.15
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.44
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.42
383 TestNetworkPlugins/group/flannel/Start 60.75
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.32
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.39
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.23
387 TestNetworkPlugins/group/bridge/Start 76.08
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
390 TestNetworkPlugins/group/flannel/NetCatPod 10.4
391 TestNetworkPlugins/group/flannel/DNS 0.17
392 TestNetworkPlugins/group/flannel/Localhost 0.17
393 TestNetworkPlugins/group/flannel/HairPin 0.16
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
395 TestNetworkPlugins/group/bridge/NetCatPod 8.25
396 TestNetworkPlugins/group/bridge/DNS 0.16
397 TestNetworkPlugins/group/bridge/Localhost 0.19
398 TestNetworkPlugins/group/bridge/HairPin 0.29
TestDownloadOnly/v1.28.0/json-events (5.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-434543 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-434543 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.38781152s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.39s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 10:09:53.211689 1584532 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1123 10:09:53.211768 1584532 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-434543
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-434543: exit status 85 (81.503489ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-434543 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-434543 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:09:47
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:09:47.863860 1584538 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:09:47.863991 1584538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:09:47.863997 1584538 out.go:374] Setting ErrFile to fd 2...
	I1123 10:09:47.864002 1584538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:09:47.864374 1584538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	W1123 10:09:47.864546 1584538 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21968-1582671/.minikube/config/config.json: open /home/jenkins/minikube-integration/21968-1582671/.minikube/config/config.json: no such file or directory
	I1123 10:09:47.865531 1584538 out.go:368] Setting JSON to true
	I1123 10:09:47.866403 1584538 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39133,"bootTime":1763853455,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:09:47.866502 1584538 start.go:143] virtualization:  
	I1123 10:09:47.872149 1584538 out.go:99] [download-only-434543] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1123 10:09:47.872355 1584538 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 10:09:47.872494 1584538 notify.go:221] Checking for updates...
	I1123 10:09:47.876740 1584538 out.go:171] MINIKUBE_LOCATION=21968
	I1123 10:09:47.880116 1584538 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:09:47.883316 1584538 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:09:47.886510 1584538 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:09:47.889655 1584538 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 10:09:47.895802 1584538 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 10:09:47.896062 1584538 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:09:47.928435 1584538 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:09:47.928545 1584538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:09:47.994356 1584538 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 10:09:47.984781015 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:09:47.994466 1584538 docker.go:319] overlay module found
	I1123 10:09:47.997491 1584538 out.go:99] Using the docker driver based on user configuration
	I1123 10:09:47.997539 1584538 start.go:309] selected driver: docker
	I1123 10:09:47.997547 1584538 start.go:927] validating driver "docker" against <nil>
	I1123 10:09:47.997656 1584538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:09:48.061489 1584538 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 10:09:48.052362782 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:09:48.061652 1584538 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:09:48.061926 1584538 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 10:09:48.062106 1584538 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 10:09:48.065391 1584538 out.go:171] Using Docker driver with root privileges
	I1123 10:09:48.068397 1584538 cni.go:84] Creating CNI manager for ""
	I1123 10:09:48.068470 1584538 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 10:09:48.068483 1584538 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 10:09:48.068570 1584538 start.go:353] cluster config:
	{Name:download-only-434543 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-434543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:09:48.071609 1584538 out.go:99] Starting "download-only-434543" primary control-plane node in "download-only-434543" cluster
	I1123 10:09:48.071628 1584538 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 10:09:48.074498 1584538 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 10:09:48.074537 1584538 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 10:09:48.074695 1584538 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 10:09:48.091032 1584538 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 10:09:48.091251 1584538 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 10:09:48.091352 1584538 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 10:09:48.130741 1584538 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 10:09:48.130772 1584538 cache.go:65] Caching tarball of preloaded images
	I1123 10:09:48.130953 1584538 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 10:09:48.134431 1584538 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 10:09:48.134466 1584538 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1123 10:09:48.215752 1584538 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1123 10:09:48.215882 1584538 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 10:09:51.956617 1584538 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 10:09:51.957124 1584538 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/download-only-434543/config.json ...
	I1123 10:09:51.957164 1584538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/download-only-434543/config.json: {Name:mkc063911e8de874c2d0f4687dd80f4b9ec27fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 10:09:51.958111 1584538 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 10:09:51.958945 1584538 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-434543 host does not exist
	  To start a cluster, run: "minikube start -p download-only-434543"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)
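For context on the preload handling shown in the log above: a checksum is obtained from the GCS API ("38d7f581f2fa4226c8af2c9106b982b7") and the tarball URL is downloaded with it attached as a ?checksum=md5: parameter. The following is a small, illustrative Go sketch of verifying the same tarball by hand; the URL and expected digest are copied from the log, and the program itself is not part of minikube.

	// Hedged sketch: re-verify the preload tarball download recorded above.
	package main

	import (
		"crypto/md5"
		"fmt"
		"io"
		"net/http"
	)

	func main() {
		// URL and expected md5 are copied from the download log above (illustrative only).
		const url = "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
		const want = "38d7f581f2fa4226c8af2c9106b982b7"

		resp, err := http.Get(url)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			panic(resp.Status)
		}

		// Stream the body through an md5 hash and compare against the logged checksum.
		h := md5.New()
		if _, err := io.Copy(h, resp.Body); err != nil {
			panic(err)
		}
		fmt.Println("md5 matches:", fmt.Sprintf("%x", h.Sum(nil)) == want)
	}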

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-434543
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-728946 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-728946 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (4.860686311s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.86s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 10:09:58.498751 1584532 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1123 10:09:58.498787 1584532 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.41s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-728946
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-728946: exit status 85 (405.816511ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-434543 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-434543 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ delete  │ -p download-only-434543                                                                                                                                                               │ download-only-434543 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │ 23 Nov 25 10:09 UTC │
	│ start   │ -o=json --download-only -p download-only-728946 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-728946 │ jenkins │ v1.37.0 │ 23 Nov 25 10:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 10:09:53
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 10:09:53.683748 1584733 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:09:53.683992 1584733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:09:53.684025 1584733 out.go:374] Setting ErrFile to fd 2...
	I1123 10:09:53.684048 1584733 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:09:53.684349 1584733 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:09:53.684776 1584733 out.go:368] Setting JSON to true
	I1123 10:09:53.685622 1584733 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39139,"bootTime":1763853455,"procs":145,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:09:53.685717 1584733 start.go:143] virtualization:  
	I1123 10:09:53.689126 1584733 out.go:99] [download-only-728946] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:09:53.689372 1584733 notify.go:221] Checking for updates...
	I1123 10:09:53.692302 1584733 out.go:171] MINIKUBE_LOCATION=21968
	I1123 10:09:53.695296 1584733 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:09:53.698108 1584733 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:09:53.700853 1584733 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:09:53.703645 1584733 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 10:09:53.709144 1584733 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 10:09:53.709391 1584733 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:09:53.734653 1584733 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:09:53.734762 1584733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:09:53.795888 1584733 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-23 10:09:53.786751918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:09:53.795994 1584733 docker.go:319] overlay module found
	I1123 10:09:53.798852 1584733 out.go:99] Using the docker driver based on user configuration
	I1123 10:09:53.798896 1584733 start.go:309] selected driver: docker
	I1123 10:09:53.798905 1584733 start.go:927] validating driver "docker" against <nil>
	I1123 10:09:53.798998 1584733 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:09:53.855848 1584733 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-11-23 10:09:53.846722157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:09:53.855997 1584733 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 10:09:53.856285 1584733 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 10:09:53.856439 1584733 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 10:09:53.859417 1584733 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-728946 host does not exist
	  To start a cluster, run: "minikube start -p download-only-728946"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.41s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.37s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-728946
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.22s)

                                                
                                    
TestBinaryMirror (0.62s)

                                                
                                                
=== RUN   TestBinaryMirror
I1123 10:10:00.560876 1584532 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-698338 --alsologtostderr --binary-mirror http://127.0.0.1:34891 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-698338" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-698338
--- PASS: TestBinaryMirror (0.62s)
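TestBinaryMirror exercises the --binary-mirror flag, pointing the kubectl binary download at a local HTTP endpoint (http://127.0.0.1:34891 in the run above) rather than the default dl.k8s.io location seen in the log line above. A minimal sketch of such a mirror using only the Go standard library follows; the ./mirror directory layout and the reuse of port 34891 are illustrative assumptions, not what the test itself does.

	// Hedged sketch: a static file server that a --binary-mirror URL could point at.
	package main

	import (
		"log"
		"net/http"
	)

	func main() {
		// Serve files (e.g. release/v1.34.1/bin/linux/arm64/kubectl) from a local directory.
		http.Handle("/", http.FileServer(http.Dir("./mirror")))
		log.Println("binary mirror listening on 127.0.0.1:34891")
		log.Fatal(http.ListenAndServe("127.0.0.1:34891", nil))
	}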

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-966210
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-966210: exit status 85 (79.535751ms)

                                                
                                                
-- stdout --
	* Profile "addons-966210" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-966210"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-966210
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-966210: exit status 85 (82.220374ms)

                                                
                                                
-- stdout --
	* Profile "addons-966210" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-966210"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (173.66s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-966210 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-966210 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m53.655734102s)
--- PASS: TestAddons/Setup (173.66s)

                                                
                                    
TestAddons/serial/Volcano (39.66s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 65.798982ms
addons_test.go:876: volcano-admission stabilized in 65.887259ms
addons_test.go:868: volcano-scheduler stabilized in 66.051357ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-bqdrl" [ce605298-eb13-4dce-bba8-88d65e93e3ba] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.018284581s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-29mp4" [a6ceab4f-a283-4d71-8b1e-e0bbbb1120ae] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.003197059s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-46622" [92849ba4-c1d1-4579-bc2c-6379a43c350d] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.002902439s
addons_test.go:903: (dbg) Run:  kubectl --context addons-966210 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-966210 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-966210 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [5b45e60c-d156-438b-96ef-254a40a4e126] Pending
helpers_test.go:352: "test-job-nginx-0" [5b45e60c-d156-438b-96ef-254a40a4e126] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [5b45e60c-d156-438b-96ef-254a40a4e126] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.007714424s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-966210 addons disable volcano --alsologtostderr -v=1: (12.021203671s)
--- PASS: TestAddons/serial/Volcano (39.66s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-966210 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-966210 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.81s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-966210 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-966210 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [fbc35149-d991-4788-8ded-65d12b75ca27] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [fbc35149-d991-4788-8ded-65d12b75ca27] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004283737s
addons_test.go:694: (dbg) Run:  kubectl --context addons-966210 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-966210 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-966210 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-966210 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.81s)

                                                
                                    
TestAddons/parallel/Registry (16.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.355807ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-msxsg" [e1291ba6-0c9d-484f-9960-d565ca9d3255] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003660895s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-jcsdg" [1b7c6d93-e1d2-415b-b6f0-d9adb809e804] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003091905s
addons_test.go:392: (dbg) Run:  kubectl --context addons-966210 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-966210 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-966210 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.399827066s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 ip
2025/11/23 10:14:08 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.39s)

                                                
                                    
TestAddons/parallel/RegistryCreds (0.87s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.770106ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-966210
addons_test.go:332: (dbg) Run:  kubectl --context addons-966210 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.87s)

                                                
                                    
TestAddons/parallel/Ingress (18.82s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-966210 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-966210 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-966210 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [2e034525-4b43-489c-819f-b3b778d29c25] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [2e034525-4b43-489c-819f-b3b778d29c25] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003714012s
I1123 10:15:27.268622 1584532 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-966210 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-966210 addons disable ingress-dns --alsologtostderr -v=1: (1.340991882s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-966210 addons disable ingress --alsologtostderr -v=1: (7.889796988s)
--- PASS: TestAddons/parallel/Ingress (18.82s)

                                                
                                    
TestAddons/parallel/InspektorGadget (11.78s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-p2mkg" [59397563-93a2-4f03-86f4-4d88ddd2e086] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00360293s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-966210 addons disable inspektor-gadget --alsologtostderr -v=1: (5.776066692s)
--- PASS: TestAddons/parallel/InspektorGadget (11.78s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.78s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 4.902307ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-r7jdv" [bb92e727-3c14-48be-90d0-1050f53c4d53] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003152929s
addons_test.go:463: (dbg) Run:  kubectl --context addons-966210 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.78s)

                                                
                                    
TestAddons/parallel/CSI (51.31s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1123 10:14:04.629826 1584532 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 10:14:04.633522 1584532 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 10:14:04.633550 1584532 kapi.go:107] duration metric: took 6.693279ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.705306ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-966210 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-966210 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [54b6fa27-e639-4e2f-9c74-2b2754c1250c] Pending
helpers_test.go:352: "task-pv-pod" [54b6fa27-e639-4e2f-9c74-2b2754c1250c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [54b6fa27-e639-4e2f-9c74-2b2754c1250c] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004151346s
addons_test.go:572: (dbg) Run:  kubectl --context addons-966210 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-966210 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-966210 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-966210 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-966210 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-966210 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-966210 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [4b927402-e825-455b-baa6-4649b683a18a] Pending
helpers_test.go:352: "task-pv-pod-restore" [4b927402-e825-455b-baa6-4649b683a18a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [4b927402-e825-455b-baa6-4649b683a18a] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003368428s
addons_test.go:614: (dbg) Run:  kubectl --context addons-966210 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-966210 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-966210 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-966210 addons disable volumesnapshots --alsologtostderr -v=1: (1.043597343s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-966210 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.877076764s)
--- PASS: TestAddons/parallel/CSI (51.31s)

                                                
                                    
TestAddons/parallel/Headlamp (11.35s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-966210 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-966210 --alsologtostderr -v=1: (1.061966136s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-ng8fx" [f4f3e1c6-0850-4362-8c86-3b8ef3cb5e50] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-ng8fx" [f4f3e1c6-0850-4362-8c86-3b8ef3cb5e50] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.004226304s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.35s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-2dlnl" [bbf42a3f-6dc3-4417-a3d5-87c61ccc2eac] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004471054s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.63s)

                                                
                                    
TestAddons/parallel/LocalPath (53.31s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-966210 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-966210 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-966210 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [c60d5b6e-3f3a-4842-82c6-71c4fef0e680] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [c60d5b6e-3f3a-4842-82c6-71c4fef0e680] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [c60d5b6e-3f3a-4842-82c6-71c4fef0e680] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003487079s
addons_test.go:967: (dbg) Run:  kubectl --context addons-966210 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 ssh "cat /opt/local-path-provisioner/pvc-0101da82-2c78-4be0-8679-f264ebafb196_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-966210 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-966210 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-966210 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.065448736s)
--- PASS: TestAddons/parallel/LocalPath (53.31s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-wkq2c" [c76a02dc-011f-444b-9744-1a045de93709] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003296661s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.58s)

                                                
                                    
TestAddons/parallel/Yakd (11.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-7ftwn" [794eb671-06cf-47c6-8e1c-b8d1df5f5c5e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.00506778s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-966210 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-966210 addons disable yakd --alsologtostderr -v=1: (5.854943492s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.34s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-966210
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-966210: (12.068385932s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-966210
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-966210
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-966210
--- PASS: TestAddons/StoppedEnableDisable (12.34s)

                                                
                                    
TestCertOptions (42.26s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-501705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-501705 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (39.369726118s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-501705 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-501705 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-501705 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-501705" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-501705
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-501705: (2.150330595s)
--- PASS: TestCertOptions (42.26s)

                                                
                                    
TestCertExpiration (233.38s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-679101 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-679101 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (42.504962114s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-679101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-679101 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (7.901764605s)
helpers_test.go:175: Cleaning up "cert-expiration-679101" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-679101
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-679101: (2.968880447s)
--- PASS: TestCertExpiration (233.38s)

                                                
                                    
TestForceSystemdFlag (37.3s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-639619 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1123 10:52:54.960384 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-639619 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.908798808s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-639619 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-639619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-639619
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-639619: (2.072897271s)
--- PASS: TestForceSystemdFlag (37.30s)

                                                
                                    
TestForceSystemdEnv (43.25s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-479166 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-479166 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.483182974s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-479166 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-479166" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-479166
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-479166: (2.341345875s)
--- PASS: TestForceSystemdEnv (43.25s)

                                                
                                    
TestDockerEnvContainerd (48.57s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-145484 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-145484 --driver=docker  --container-runtime=containerd: (32.6662838s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-145484"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-145484": (1.074619896s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-aYqWCYd7sPLG/agent.1604003" SSH_AGENT_PID="1604004" DOCKER_HOST=ssh://docker@127.0.0.1:34972 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-aYqWCYd7sPLG/agent.1604003" SSH_AGENT_PID="1604004" DOCKER_HOST=ssh://docker@127.0.0.1:34972 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-aYqWCYd7sPLG/agent.1604003" SSH_AGENT_PID="1604004" DOCKER_HOST=ssh://docker@127.0.0.1:34972 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.225592275s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-aYqWCYd7sPLG/agent.1604003" SSH_AGENT_PID="1604004" DOCKER_HOST=ssh://docker@127.0.0.1:34972 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-145484" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-145484
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-145484: (2.032477909s)
--- PASS: TestDockerEnvContainerd (48.57s)

                                                
                                    
TestErrorSpam/setup (34.64s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-807338 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-807338 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-807338 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-807338 --driver=docker  --container-runtime=containerd: (34.64201899s)
--- PASS: TestErrorSpam/setup (34.64s)

                                                
                                    
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.14s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 status
--- PASS: TestErrorSpam/status (1.14s)

                                                
                                    
TestErrorSpam/pause (1.65s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 pause
--- PASS: TestErrorSpam/pause (1.65s)

                                                
                                    
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
TestErrorSpam/stop (1.71s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 stop: (1.508818058s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-807338 --log_dir /tmp/nospam-807338 stop
--- PASS: TestErrorSpam/stop (1.71s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21968-1582671/.minikube/files/etc/test/nested/copy/1584532/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
TestFunctional/serial/StartWithProxy (78.78s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-531629 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1123 10:17:54.966173 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:17:54.973627 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:17:54.984969 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:17:55.006327 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:17:55.047700 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:17:55.129090 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:17:55.290703 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:17:55.612349 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:17:56.254301 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:17:57.535864 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:00.135114 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:05.256704 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:15.498515 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:18:35.979868 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-531629 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m18.775684547s)
--- PASS: TestFunctional/serial/StartWithProxy (78.78s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.89s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1123 10:18:49.951659 1584532 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-531629 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-531629 --alsologtostderr -v=8: (6.891641448s)
functional_test.go:678: soft start took 6.893065496s for "functional-531629" cluster.
I1123 10:18:56.843677 1584532 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.89s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-531629 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-531629 cache add registry.k8s.io/pause:3.1: (1.269067912s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-531629 cache add registry.k8s.io/pause:3.3: (1.131546218s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-531629 cache add registry.k8s.io/pause:latest: (1.19243558s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.59s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-531629 /tmp/TestFunctionalserialCacheCmdcacheadd_local874722895/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 cache add minikube-local-cache-test:functional-531629
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 cache delete minikube-local-cache-test:functional-531629
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-531629
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-531629 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (286.75205ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.85s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 kubectl -- --context functional-531629 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-531629 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (43.32s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-531629 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1123 10:19:16.941282 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-531629 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.316094852s)
functional_test.go:776: restart took 43.316197028s for "functional-531629" cluster.
I1123 10:19:47.828212 1584532 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (43.32s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-531629 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.10s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-531629 logs: (1.480063506s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 logs --file /tmp/TestFunctionalserialLogsFileCmd357682233/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-531629 logs --file /tmp/TestFunctionalserialLogsFileCmd357682233/001/logs.txt: (1.454782847s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.46s)

                                                
                                    
TestFunctional/serial/InvalidService (4.78s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-531629 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-531629
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-531629: exit status 115 (699.389453ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31531 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-531629 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.78s)
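The SVC_UNREACHABLE exit above is produced by a service whose selector matches no running pod. A minimal sketch of such a manifest (an assumed shape; the real testdata/invalidsvc.yaml may differ):

cat <<'EOF' | kubectl --context functional-531629 apply -f -
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: does-not-exist    # no pod carries this label
  ports:
  - port: 80
    targetPort: 80
EOF
out/minikube-linux-arm64 service invalid-svc -p functional-531629   # exits 115 with SVC_UNREACHABLE
kubectl --context functional-531629 delete svc invalid-svc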

TestFunctional/parallel/ConfigCmd (0.5s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-531629 config get cpus: exit status 14 (80.590049ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-531629 config get cpus: exit status 14 (97.013349ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
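Exit status 14 is the interesting detail here: "config get" on a key that is not set fails with 14, which makes the set/unset round-trip scriptable. A short usage sketch of the same commands:

out/minikube-linux-arm64 -p functional-531629 config set cpus 2
out/minikube-linux-arm64 -p functional-531629 config get cpus        # prints 2
out/minikube-linux-arm64 -p functional-531629 config unset cpus
out/minikube-linux-arm64 -p functional-531629 config get cpus || echo "cpus is not set (exit $?)"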

TestFunctional/parallel/DashboardCmd (8.98s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-531629 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-531629 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 1621087: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.98s)

TestFunctional/parallel/DryRun (0.68s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-531629 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-531629 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (402.650034ms)
-- stdout --
	* [functional-531629] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1123 10:20:34.328556 1620208 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:20:34.332864 1620208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:34.332881 1620208 out.go:374] Setting ErrFile to fd 2...
	I1123 10:20:34.332888 1620208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:34.333221 1620208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:20:34.333675 1620208 out.go:368] Setting JSON to false
	I1123 10:20:34.343551 1620208 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39780,"bootTime":1763853455,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:20:34.343655 1620208 start.go:143] virtualization:  
	I1123 10:20:34.347355 1620208 out.go:179] * [functional-531629] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:20:34.351082 1620208 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:20:34.351533 1620208 notify.go:221] Checking for updates...
	I1123 10:20:34.356880 1620208 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:20:34.359803 1620208 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:20:34.362646 1620208 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:20:34.365565 1620208 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:20:34.372512 1620208 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:20:34.378151 1620208 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:20:34.378759 1620208 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:20:34.427715 1620208 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:20:34.427831 1620208 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:20:34.564049 1620208 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 10:20:34.553317413 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:20:34.564155 1620208 docker.go:319] overlay module found
	I1123 10:20:34.567500 1620208 out.go:179] * Using the docker driver based on existing profile
	I1123 10:20:34.570314 1620208 start.go:309] selected driver: docker
	I1123 10:20:34.570343 1620208 start.go:927] validating driver "docker" against &{Name:functional-531629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-531629 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:20:34.570446 1620208 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:20:34.574128 1620208 out.go:203] 
	W1123 10:20:34.577055 1620208 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 10:20:34.579980 1620208 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-531629 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.68s)
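--dry-run walks the full validation path without creating or changing anything, which is why the undersized request above fails fast with exit 23. A sketch of the two outcomes (the commands mirror the log, minus the extra logging flags):

out/minikube-linux-arm64 start -p functional-531629 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
# exit 23: RSRC_INSUFFICIENT_REQ_MEMORY, 250MiB is below the usable minimum of 1800MB
out/minikube-linux-arm64 start -p functional-531629 --dry-run --driver=docker --container-runtime=containerd
# exit 0: the existing profile validates cleanly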

TestFunctional/parallel/InternationalLanguage (0.22s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-531629 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-531629 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (216.646421ms)
-- stdout --
	* [functional-531629] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1123 10:20:34.918133 1620527 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:20:34.918359 1620527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:34.918391 1620527 out.go:374] Setting ErrFile to fd 2...
	I1123 10:20:34.918412 1620527 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:20:34.920000 1620527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:20:34.920682 1620527 out.go:368] Setting JSON to false
	I1123 10:20:34.921690 1620527 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":39780,"bootTime":1763853455,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:20:34.921794 1620527 start.go:143] virtualization:  
	I1123 10:20:34.924763 1620527 out.go:179] * [functional-531629] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1123 10:20:34.928471 1620527 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:20:34.928578 1620527 notify.go:221] Checking for updates...
	I1123 10:20:34.934505 1620527 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:20:34.937324 1620527 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:20:34.940132 1620527 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:20:34.942914 1620527 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:20:34.945766 1620527 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:20:34.949124 1620527 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:20:34.949758 1620527 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:20:34.972593 1620527 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:20:34.972706 1620527 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:20:35.063889 1620527 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 10:20:35.053848438 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:20:35.064020 1620527 docker.go:319] overlay module found
	I1123 10:20:35.067055 1620527 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 10:20:35.069823 1620527 start.go:309] selected driver: docker
	I1123 10:20:35.069847 1620527 start.go:927] validating driver "docker" against &{Name:functional-531629 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-531629 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 10:20:35.070049 1620527 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:20:35.073608 1620527 out.go:203] 
	W1123 10:20:35.076463 1620527 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 10:20:35.079348 1620527 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.22s)
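The French messages above are locale-driven: minikube picks its output language from the standard locale environment variables. A sketch (using LC_ALL=fr is an assumption about how the locale is selected for this run, not something shown in the log):

LC_ALL=fr out/minikube-linux-arm64 start -p functional-531629 --dry-run --memory 250MB --driver=docker --container-runtime=containerd
# same RSRC_INSUFFICIENT_REQ_MEMORY failure, with the message rendered in French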

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

TestFunctional/parallel/ServiceCmdConnect (8.65s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-531629 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-531629 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-2gzvk" [8d3d6865-ac8b-4f30-8c34-3f4c80fa8514] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-2gzvk" [8d3d6865-ac8b-4f30-8c34-3f4c80fa8514] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.002751259s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31052
functional_test.go:1680: http://192.168.49.2:31052: success! body:
Request served by hello-node-connect-7d85dfc575-2gzvk
HTTP/1.1 GET /
Host: 192.168.49.2:31052
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.65s)
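Condensed, the flow above is: create a deployment, expose it as a NodePort service, resolve the node URL, then hit it. The curl at the end is added for illustration; everything else mirrors the logged commands:

kubectl --context functional-531629 create deployment hello-node-connect --image kicbase/echo-server
kubectl --context functional-531629 expose deployment hello-node-connect --type=NodePort --port=8080
kubectl --context functional-531629 rollout status deployment/hello-node-connect
URL=$(out/minikube-linux-arm64 -p functional-531629 service hello-node-connect --url)
curl -s "$URL"   # echo-server responds with the request details, as in the body shown above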

TestFunctional/parallel/AddonsCmd (0.26s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.26s)

TestFunctional/parallel/PersistentVolumeClaim (24.64s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [5f07396d-f17f-43d0-a85d-078d61db76ce] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004920941s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-531629 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-531629 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-531629 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-531629 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [7f931754-8d46-4a0a-9a0c-82c139d7d1e4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [7f931754-8d46-4a0a-9a0c-82c139d7d1e4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.00362469s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-531629 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-531629 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-531629 delete -f testdata/storage-provisioner/pod.yaml: (1.229771719s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-531629 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [561f1ea6-0689-4073-a899-9a5807d2424a] Pending
helpers_test.go:352: "sp-pod" [561f1ea6-0689-4073-a899-9a5807d2424a] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004237609s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-531629 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.64s)
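The sequence above checks that data written to the claim survives pod replacement. A minimal sketch of the two objects involved (assumed shapes; the real testdata/storage-provisioner manifests may differ, and nginx is only a stand-in image):

cat <<'EOF' | kubectl --context functional-531629 apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: sp-pod
  labels:
    test: storage-provisioner
spec:
  containers:
  - name: myfrontend
    image: nginx
    volumeMounts:
    - name: mypd
      mountPath: /tmp/mount
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
EOF
kubectl --context functional-531629 exec sp-pod -- touch /tmp/mount/foo
# delete sp-pod, re-apply the same pod manifest, then:
kubectl --context functional-531629 exec sp-pod -- ls /tmp/mount   # foo is still there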

TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

TestFunctional/parallel/CpCmd (1.94s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh -n functional-531629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 cp functional-531629:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd31334125/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh -n functional-531629 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh -n functional-531629 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.94s)

TestFunctional/parallel/FileSync (0.47s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1584532/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo cat /etc/test/nested/copy/1584532/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.47s)
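File sync works by mirroring the contents of $MINIKUBE_HOME/.minikube/files/ into the node at start time, which is how the /etc/test/nested/copy/1584532/hosts file above got into the VM. A sketch with a hypothetical path:

mkdir -p ~/.minikube/files/etc/demo
echo "hello from the host" > ~/.minikube/files/etc/demo/message
out/minikube-linux-arm64 start -p functional-531629          # files are (re)synced on start
out/minikube-linux-arm64 -p functional-531629 ssh "cat /etc/demo/message"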

TestFunctional/parallel/CertSync (2.05s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1584532.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo cat /etc/ssl/certs/1584532.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1584532.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo cat /usr/share/ca-certificates/1584532.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/15845322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo cat /etc/ssl/certs/15845322.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/15845322.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo cat /usr/share/ca-certificates/15845322.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.05s)
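Custom CA certificates dropped under $MINIKUBE_HOME/.minikube/certs are copied into the node both under their original name and under an OpenSSL subject-hash name (the 51391683.0 entry checked above). A sketch, assuming that sync mechanism; my-ca.pem is a hypothetical file:

cp my-ca.pem ~/.minikube/certs/
out/minikube-linux-arm64 start -p functional-531629
out/minikube-linux-arm64 -p functional-531629 ssh "sudo cat /etc/ssl/certs/my-ca.pem"
openssl x509 -in my-ca.pem -noout -subject_hash   # prints the hash used for the <hash>.0 name in the node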

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-531629 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-531629 ssh "sudo systemctl is-active docker": exit status 1 (338.471045ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-531629 ssh "sudo systemctl is-active crio": exit status 1 (322.838384ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.66s)
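Since this profile runs containerd, the other runtimes should be inactive; systemctl is-active exits non-zero for an inactive unit, which is what surfaces here as exit status 1 (ssh wrapping the remote status 3). The containerd line is added for contrast:

out/minikube-linux-arm64 -p functional-531629 ssh "sudo systemctl is-active docker"       # inactive, non-zero exit
out/minikube-linux-arm64 -p functional-531629 ssh "sudo systemctl is-active crio"         # inactive, non-zero exit
out/minikube-linux-arm64 -p functional-531629 ssh "sudo systemctl is-active containerd"   # active, exit 0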

TestFunctional/parallel/License (0.37s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.37s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (1.35s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-531629 version -o=json --components: (1.34889525s)
--- PASS: TestFunctional/parallel/Version/components (1.35s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-531629 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-531629
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-531629
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-531629 image ls --format short --alsologtostderr:
I1123 10:20:41.113900 1621819 out.go:360] Setting OutFile to fd 1 ...
I1123 10:20:41.114009 1621819 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:20:41.114015 1621819 out.go:374] Setting ErrFile to fd 2...
I1123 10:20:41.114020 1621819 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:20:41.114489 1621819 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
I1123 10:20:41.115243 1621819 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 10:20:41.115394 1621819 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 10:20:41.115965 1621819 cli_runner.go:164] Run: docker container inspect functional-531629 --format={{.State.Status}}
I1123 10:20:41.136951 1621819 ssh_runner.go:195] Run: systemctl --version
I1123 10:20:41.137011 1621819 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-531629
I1123 10:20:41.154959 1621819 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34982 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/functional-531629/id_rsa Username:docker}
I1123 10:20:41.262289 1621819 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.27s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-531629 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ docker.io/kicbase/echo-server               │ functional-531629  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ docker.io/library/minikube-local-cache-test │ functional-531629  │ sha256:440b3e │ 991B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-531629 image ls --format table --alsologtostderr:
I1123 10:20:44.353788 1622104 out.go:360] Setting OutFile to fd 1 ...
I1123 10:20:44.353898 1622104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:20:44.353920 1622104 out.go:374] Setting ErrFile to fd 2...
I1123 10:20:44.353929 1622104 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:20:44.354270 1622104 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
I1123 10:20:44.355375 1622104 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 10:20:44.355536 1622104 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 10:20:44.356451 1622104 cli_runner.go:164] Run: docker container inspect functional-531629 --format={{.State.Status}}
I1123 10:20:44.383231 1622104 ssh_runner.go:195] Run: systemctl --version
I1123 10:20:44.383286 1622104 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-531629
I1123 10:20:44.405035 1622104 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34982 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/functional-531629/id_rsa Username:docker}
I1123 10:20:44.526356 1622104 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.29s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-531629 image ls --format json --alsologtostderr:
[{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDi
gests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8df
e7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size"
:"262191"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-531629","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:440b3e657308f5c4067baf21606aa349246ee02e2eca02b5984fdeb0db9eb5ec","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-531629"],"size":"991"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/sto
rage-provisioner:v5"],"size":"8034419"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-531629 image ls --format json --alsologtostderr:
I1123 10:20:44.121554 1622066 out.go:360] Setting OutFile to fd 1 ...
I1123 10:20:44.121819 1622066 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:20:44.121833 1622066 out.go:374] Setting ErrFile to fd 2...
I1123 10:20:44.121840 1622066 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:20:44.122144 1622066 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
I1123 10:20:44.122858 1622066 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 10:20:44.123035 1622066 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 10:20:44.123671 1622066 cli_runner.go:164] Run: docker container inspect functional-531629 --format={{.State.Status}}
I1123 10:20:44.140653 1622066 ssh_runner.go:195] Run: systemctl --version
I1123 10:20:44.140714 1622066 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-531629
I1123 10:20:44.158728 1622066 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34982 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/functional-531629/id_rsa Username:docker}
I1123 10:20:44.266314 1622066 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)
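The JSON format is the easiest one to post-process. For example, listing just the image tags with jq (assuming jq is installed on the host):

out/minikube-linux-arm64 -p functional-531629 image ls --format json | jq -r '.[] | .repoTags[]?'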

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-531629 image ls --format yaml --alsologtostderr:
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-531629
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:440b3e657308f5c4067baf21606aa349246ee02e2eca02b5984fdeb0db9eb5ec
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-531629
size: "991"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-531629 image ls --format yaml --alsologtostderr:
I1123 10:20:41.375780 1621857 out.go:360] Setting OutFile to fd 1 ...
I1123 10:20:41.375925 1621857 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:20:41.375931 1621857 out.go:374] Setting ErrFile to fd 2...
I1123 10:20:41.375936 1621857 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:20:41.376222 1621857 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
I1123 10:20:41.376862 1621857 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 10:20:41.376975 1621857 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 10:20:41.377577 1621857 cli_runner.go:164] Run: docker container inspect functional-531629 --format={{.State.Status}}
I1123 10:20:41.397025 1621857 ssh_runner.go:195] Run: systemctl --version
I1123 10:20:41.397090 1621857 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-531629
I1123 10:20:41.427149 1621857 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34982 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/functional-531629/id_rsa Username:docker}
I1123 10:20:41.534877 1621857 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (4.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-531629 ssh pgrep buildkitd: exit status 1 (350.257015ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image build -t localhost/my-image:functional-531629 testdata/build --alsologtostderr
2025/11/23 10:20:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-531629 image build -t localhost/my-image:functional-531629 testdata/build --alsologtostderr: (3.628994322s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-531629 image build -t localhost/my-image:functional-531629 testdata/build --alsologtostderr:
I1123 10:20:41.994843 1621979 out.go:360] Setting OutFile to fd 1 ...
I1123 10:20:42.003913 1621979 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:20:42.003999 1621979 out.go:374] Setting ErrFile to fd 2...
I1123 10:20:42.004022 1621979 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 10:20:42.004393 1621979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
I1123 10:20:42.005229 1621979 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 10:20:42.008431 1621979 config.go:182] Loaded profile config "functional-531629": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 10:20:42.009189 1621979 cli_runner.go:164] Run: docker container inspect functional-531629 --format={{.State.Status}}
I1123 10:20:42.035459 1621979 ssh_runner.go:195] Run: systemctl --version
I1123 10:20:42.035530 1621979 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-531629
I1123 10:20:42.056377 1621979 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34982 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/functional-531629/id_rsa Username:docker}
I1123 10:20:42.173444 1621979 build_images.go:162] Building image from path: /tmp/build.1689026072.tar
I1123 10:20:42.173571 1621979 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 10:20:42.189364 1621979 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1689026072.tar
I1123 10:20:42.196619 1621979 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1689026072.tar: stat -c "%s %y" /var/lib/minikube/build/build.1689026072.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1689026072.tar': No such file or directory
I1123 10:20:42.196706 1621979 ssh_runner.go:362] scp /tmp/build.1689026072.tar --> /var/lib/minikube/build/build.1689026072.tar (3072 bytes)
I1123 10:20:42.236572 1621979 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1689026072
I1123 10:20:42.247826 1621979 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1689026072 -xf /var/lib/minikube/build/build.1689026072.tar
I1123 10:20:42.260659 1621979 containerd.go:394] Building image: /var/lib/minikube/build/build.1689026072
I1123 10:20:42.260810 1621979 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1689026072 --local dockerfile=/var/lib/minikube/build/build.1689026072 --output type=image,name=localhost/my-image:functional-531629
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.4s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.7s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:4fe042f47ef0b92208af8b49b87259c8a26f3c5ccdac55a76a76883a5deefb6f 0.0s done
#8 exporting config sha256:c8f48c91597a41142cb93e84464e5733a94cbc3ce1becba81f0b702749531938 0.0s done
#8 naming to localhost/my-image:functional-531629 done
#8 DONE 0.2s
I1123 10:20:45.523288 1621979 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1689026072 --local dockerfile=/var/lib/minikube/build/build.1689026072 --output type=image,name=localhost/my-image:functional-531629: (3.262428547s)
I1123 10:20:45.523358 1621979 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1689026072
I1123 10:20:45.532112 1621979 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1689026072.tar
I1123 10:20:45.545210 1621979 build_images.go:218] Built localhost/my-image:functional-531629 from /tmp/build.1689026072.tar
I1123 10:20:45.545258 1621979 build_images.go:134] succeeded building to: functional-531629
I1123 10:20:45.545264 1621979 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.21s)
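The build above packages testdata/build into a tarball, copies it to the node, and runs BuildKit there. A rough by-hand equivalent, assuming the context holds a Dockerfile matching the three steps visible in the trace (FROM the gcr.io/k8s-minikube/busybox base, RUN true, ADD content.txt /); the build.1689026072 path is the temporary directory created during this particular run:

    # drive the build through minikube
    out/minikube-linux-arm64 -p functional-531629 image build -t localhost/my-image:functional-531629 testdata/build
    # ...which runs buildctl on the node, as logged:
    sudo buildctl build --frontend dockerfile.v0 \
      --local context=/var/lib/minikube/build/build.1689026072 \
      --local dockerfile=/var/lib/minikube/build/build.1689026072 \
      --output type=image,name=localhost/my-image:functional-531629
    # confirm the image landed in the runtime
    out/minikube-linux-arm64 -p functional-531629 image ls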

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-531629
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.65s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image load --daemon kicbase/echo-server:functional-531629 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-531629 image load --daemon kicbase/echo-server:functional-531629 --alsologtostderr: (1.108734184s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image load --daemon kicbase/echo-server:functional-531629 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-arm64 -p functional-531629 image load --daemon kicbase/echo-server:functional-531629 --alsologtostderr: (1.219446909s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.61s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (7.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-531629 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-531629 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-w5chj" [15d4a5dc-3620-4334-abff-5fa3cca938a6] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-w5chj" [15d4a5dc-3620-4334-abff-5fa3cca938a6] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004098713s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.48s)
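The deployment under test is a stock echo server exposed as a NodePort. A minimal sketch of the same flow (the explicit pod watch is a hand equivalent of the harness's pod matcher):

    kubectl --context functional-531629 create deployment hello-node --image kicbase/echo-server
    kubectl --context functional-531629 expose deployment hello-node --type=NodePort --port=8080
    kubectl --context functional-531629 get pods -l app=hello-node --watch   # wait until Running
    # the later ServiceCmd tests then resolve the NodePort endpoint
    out/minikube-linux-arm64 -p functional-531629 service list
    out/minikube-linux-arm64 -p functional-531629 service hello-node --url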

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-531629
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image load --daemon kicbase/echo-server:functional-531629 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.41s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image save kicbase/echo-server:functional-531629 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image rm kicbase/echo-server:functional-531629 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.64s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-531629
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 image save --daemon kicbase/echo-server:functional-531629 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-531629
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
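Taken together, the image subcommands above round-trip an image between the host docker daemon, a tarball, and the node's containerd store. A condensed sketch (the tarball path here is illustrative; the run above writes it into the Jenkins workspace):

    # push a locally tagged image into the node
    docker tag kicbase/echo-server:latest kicbase/echo-server:functional-531629
    out/minikube-linux-arm64 -p functional-531629 image load --daemon kicbase/echo-server:functional-531629
    # round-trip through a tarball
    out/minikube-linux-arm64 -p functional-531629 image save kicbase/echo-server:functional-531629 /tmp/echo-server-save.tar
    out/minikube-linux-arm64 -p functional-531629 image rm kicbase/echo-server:functional-531629
    out/minikube-linux-arm64 -p functional-531629 image load /tmp/echo-server-save.tar
    # copy it back out to the host daemon and verify
    out/minikube-linux-arm64 -p functional-531629 image save --daemon kicbase/echo-server:functional-531629
    docker image inspect kicbase/echo-server:functional-531629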

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-531629 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-531629 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-531629 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1617608: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-531629 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-531629 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-531629 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [224e5364-a249-4c4e-b0ee-7a748ac1f53b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [224e5364-a249-4c4e-b0ee-7a748ac1f53b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003717244s
I1123 10:20:14.500750 1584532 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 service list -o json
functional_test.go:1504: Took "332.943419ms" to run "out/minikube-linux-arm64 -p functional-531629 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31443
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31443
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-531629 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.156.65 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-531629 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
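The tunnel serial group boils down to: run minikube tunnel in the background, create a LoadBalancer service, read the ingress IP it gets assigned, hit it, and tear the tunnel down. A sketch built from the commands logged above (testdata/testsvc.yaml is the nginx LoadBalancer manifest used by the suite; the curl check stands in for the harness's HTTP probe, and 10.111.156.65 was the IP assigned in this run):

    out/minikube-linux-arm64 -p functional-531629 tunnel --alsologtostderr &
    kubectl --context functional-531629 apply -f testdata/testsvc.yaml
    kubectl --context functional-531629 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.111.156.65/        # should answer once the tunnel is up
    kill %1                           # stop the backgrounded tunnel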

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "381.80319ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "67.192654ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "374.089002ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "60.902941ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.44s)
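The profile commands timed here are plain list views; the --light variants skip the live status check on each cluster, which lines up with the much shorter timings above:

    out/minikube-linux-arm64 profile list
    out/minikube-linux-arm64 profile list -o json
    out/minikube-linux-arm64 profile list -o json --light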

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdany-port950206457/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763893226055256569" to /tmp/TestFunctionalparallelMountCmdany-port950206457/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763893226055256569" to /tmp/TestFunctionalparallelMountCmdany-port950206457/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763893226055256569" to /tmp/TestFunctionalparallelMountCmdany-port950206457/001/test-1763893226055256569
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (356.878089ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 10:20:26.413851 1584532 retry.go:31] will retry after 432.429078ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 10:20 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 10:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 10:20 test-1763893226055256569
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh cat /mount-9p/test-1763893226055256569
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-531629 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [534a4222-b5f8-4ce2-b6aa-12d68a4a3701] Pending
helpers_test.go:352: "busybox-mount" [534a4222-b5f8-4ce2-b6aa-12d68a4a3701] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [534a4222-b5f8-4ce2-b6aa-12d68a4a3701] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [534a4222-b5f8-4ce2-b6aa-12d68a4a3701] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003770832s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-531629 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdany-port950206457/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.03s)
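The mount tests drive the 9p mount daemon directly. A by-hand sketch (the host path here is illustrative; the test uses a per-test temp directory, and the specific-port variant below only adds --port 46464):

    # expose a host directory inside the node over 9p (backgrounded, like the test daemon)
    out/minikube-linux-arm64 mount -p functional-531629 /tmp/shared:/mount-9p &
    # verify from inside the node
    out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-arm64 -p functional-531629 ssh -- ls -la /mount-9p
    # tear down: unmount and kill any remaining mount processes for the profile
    out/minikube-linux-arm64 -p functional-531629 ssh "sudo umount -f /mount-9p"
    out/minikube-linux-arm64 mount -p functional-531629 --kill=true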

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdspecific-port4181658861/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (441.531864ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 10:20:34.524080 1584532 retry.go:31] will retry after 465.238907ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdspecific-port4181658861/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-531629 ssh "sudo umount -f /mount-9p": exit status 1 (361.459651ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-531629 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdspecific-port4181658861/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.21s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (2.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2074164123/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2074164123/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2074164123/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T" /mount1: exit status 1 (1.076743745s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1123 10:20:37.373408 1584532 retry.go:31] will retry after 524.123895ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-531629 ssh "findmnt -T" /mount3
E1123 10:20:38.863659 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-531629 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2074164123/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2074164123/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-531629 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2074164123/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.72s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-531629
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-531629
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-531629
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (200.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1123 10:22:54.961143 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:23:22.704993 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m19.885751077s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (200.77s)
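The HA start above brings up a multi-control-plane cluster (--ha defaults to three control-plane nodes) on the docker driver with containerd, then verifies it with status; the two invocations from the log are:

    out/minikube-linux-arm64 -p ha-711488 start --ha --memory 3072 --wait true \
      --alsologtostderr -v 5 --driver=docker --container-runtime=containerd
    out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5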

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (8.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 kubectl -- rollout status deployment/busybox: (5.322044191s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-7g2nj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-b25cf -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-sr2kw -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-7g2nj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-b25cf -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-sr2kw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-7g2nj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-b25cf -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-sr2kw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.16s)
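The deploy/DNS checks amount to: roll out the busybox test deployment, then run nslookup from every replica against kubernetes.io, kubernetes.default, and the fully qualified service name. A condensed sketch (the loop replaces the per-pod exec calls above; pod names are whatever the ReplicaSet generates):

    out/minikube-linux-arm64 -p ha-711488 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-arm64 -p ha-711488 kubectl -- rollout status deployment/busybox
    for pod in $(out/minikube-linux-arm64 -p ha-711488 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'); do
      out/minikube-linux-arm64 -p ha-711488 kubectl -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done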

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-7g2nj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-7g2nj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-b25cf -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-b25cf -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-sr2kw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-sr2kw -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.57s)
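Host reachability is checked the same way: resolve host.minikube.internal from inside a pod, then ping the resulting address (192.168.49.1 in this run):

    out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-7g2nj -- \
      sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    out/minikube-linux-arm64 -p ha-711488 kubectl -- exec busybox-7b57f96db7-7g2nj -- sh -c "ping -c 1 192.168.49.1"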

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (30.65s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 node add --alsologtostderr -v 5: (29.611154914s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5: (1.034069073s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (30.65s)
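node add without --control-plane joins an additional worker, which is the -m04 node referenced in the CopyFile test below; status then reports the enlarged cluster:

    out/minikube-linux-arm64 -p ha-711488 node add --alsologtostderr -v 5
    out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5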

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-711488 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.049362233s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.05s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (20.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 status --output json --alsologtostderr -v 5: (1.039028702s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp testdata/cp-test.txt ha-711488:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4039536788/001/cp-test_ha-711488.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488:/home/docker/cp-test.txt ha-711488-m02:/home/docker/cp-test_ha-711488_ha-711488-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m02 "sudo cat /home/docker/cp-test_ha-711488_ha-711488-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488:/home/docker/cp-test.txt ha-711488-m03:/home/docker/cp-test_ha-711488_ha-711488-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m03 "sudo cat /home/docker/cp-test_ha-711488_ha-711488-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488:/home/docker/cp-test.txt ha-711488-m04:/home/docker/cp-test_ha-711488_ha-711488-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m04 "sudo cat /home/docker/cp-test_ha-711488_ha-711488-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp testdata/cp-test.txt ha-711488-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4039536788/001/cp-test_ha-711488-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m02:/home/docker/cp-test.txt ha-711488:/home/docker/cp-test_ha-711488-m02_ha-711488.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488 "sudo cat /home/docker/cp-test_ha-711488-m02_ha-711488.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m02:/home/docker/cp-test.txt ha-711488-m03:/home/docker/cp-test_ha-711488-m02_ha-711488-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m03 "sudo cat /home/docker/cp-test_ha-711488-m02_ha-711488-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m02:/home/docker/cp-test.txt ha-711488-m04:/home/docker/cp-test_ha-711488-m02_ha-711488-m04.txt
E1123 10:25:00.682708 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:25:00.689125 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:25:00.700469 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:25:00.721852 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:25:00.763173 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:25:00.844593 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:25:01.005947 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m02 "sudo cat /home/docker/cp-test.txt"
E1123 10:25:01.328726 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m04 "sudo cat /home/docker/cp-test_ha-711488-m02_ha-711488-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp testdata/cp-test.txt ha-711488-m03:/home/docker/cp-test.txt
E1123 10:25:01.970518 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4039536788/001/cp-test_ha-711488-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m03:/home/docker/cp-test.txt ha-711488:/home/docker/cp-test_ha-711488-m03_ha-711488.txt
E1123 10:25:03.252598 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488 "sudo cat /home/docker/cp-test_ha-711488-m03_ha-711488.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m03:/home/docker/cp-test.txt ha-711488-m02:/home/docker/cp-test_ha-711488-m03_ha-711488-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m02 "sudo cat /home/docker/cp-test_ha-711488-m03_ha-711488-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m03:/home/docker/cp-test.txt ha-711488-m04:/home/docker/cp-test_ha-711488-m03_ha-711488-m04.txt
E1123 10:25:05.813943 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m04 "sudo cat /home/docker/cp-test_ha-711488-m03_ha-711488-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp testdata/cp-test.txt ha-711488-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4039536788/001/cp-test_ha-711488-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m04:/home/docker/cp-test.txt ha-711488:/home/docker/cp-test_ha-711488-m04_ha-711488.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488 "sudo cat /home/docker/cp-test_ha-711488-m04_ha-711488.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m04:/home/docker/cp-test.txt ha-711488-m02:/home/docker/cp-test_ha-711488-m04_ha-711488-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m02 "sudo cat /home/docker/cp-test_ha-711488-m04_ha-711488-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m04:/home/docker/cp-test.txt ha-711488-m03:/home/docker/cp-test_ha-711488-m04_ha-711488-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m04 "sudo cat /home/docker/cp-test.txt"
E1123 10:25:10.935876 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m03 "sudo cat /home/docker/cp-test_ha-711488-m04_ha-711488-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.41s)
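For reference, the copy checks above reduce to pairs of minikube cp / ssh commands; a minimal sketch using the profile and node names from this run (destination paths are illustrative):
# copy a local file onto a specific node of the ha-711488 profile
out/minikube-linux-arm64 -p ha-711488 cp testdata/cp-test.txt ha-711488-m02:/home/docker/cp-test.txt
# read it back over ssh on that node to confirm the contents arrived
out/minikube-linux-arm64 -p ha-711488 ssh -n ha-711488-m02 "sudo cat /home/docker/cp-test.txt"
# node-to-node copies use a <node>:<path> source as well
out/minikube-linux-arm64 -p ha-711488 cp ha-711488-m02:/home/docker/cp-test.txt ha-711488-m03:/home/docker/cp-test_m02_m03.txt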

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (12.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 node stop m02 --alsologtostderr -v 5
E1123 10:25:21.177207 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 node stop m02 --alsologtostderr -v 5: (12.08060571s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5: exit status 7 (793.620817ms)

                                                
                                                
-- stdout --
	ha-711488
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-711488-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-711488-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-711488-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:25:23.493482 1638467 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:25:23.493594 1638467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:25:23.493604 1638467 out.go:374] Setting ErrFile to fd 2...
	I1123 10:25:23.493610 1638467 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:25:23.493872 1638467 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:25:23.494044 1638467 out.go:368] Setting JSON to false
	I1123 10:25:23.494076 1638467 mustload.go:66] Loading cluster: ha-711488
	I1123 10:25:23.494127 1638467 notify.go:221] Checking for updates...
	I1123 10:25:23.494505 1638467 config.go:182] Loaded profile config "ha-711488": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:25:23.494522 1638467 status.go:174] checking status of ha-711488 ...
	I1123 10:25:23.495406 1638467 cli_runner.go:164] Run: docker container inspect ha-711488 --format={{.State.Status}}
	I1123 10:25:23.516300 1638467 status.go:371] ha-711488 host status = "Running" (err=<nil>)
	I1123 10:25:23.516324 1638467 host.go:66] Checking if "ha-711488" exists ...
	I1123 10:25:23.516610 1638467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-711488
	I1123 10:25:23.547604 1638467 host.go:66] Checking if "ha-711488" exists ...
	I1123 10:25:23.547966 1638467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:25:23.548038 1638467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-711488
	I1123 10:25:23.567064 1638467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34987 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/ha-711488/id_rsa Username:docker}
	I1123 10:25:23.672876 1638467 ssh_runner.go:195] Run: systemctl --version
	I1123 10:25:23.679134 1638467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:25:23.692730 1638467 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:25:23.752392 1638467 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-23 10:25:23.742997272 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:25:23.753607 1638467 kubeconfig.go:125] found "ha-711488" server: "https://192.168.49.254:8443"
	I1123 10:25:23.753649 1638467 api_server.go:166] Checking apiserver status ...
	I1123 10:25:23.753701 1638467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:25:23.767973 1638467 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1418/cgroup
	I1123 10:25:23.777601 1638467 api_server.go:182] apiserver freezer: "10:freezer:/docker/1c52707d7b2854790f4af31fa6d2c4b71f2cc1a2af88b2112c9c80c4b9c7afa8/kubepods/burstable/pod221493aef04c96d804ed40fb764a427a/3a868d3068d502f28aae24a2d3bf1b4f860531149edb504745b5c92841fc9d44"
	I1123 10:25:23.777682 1638467 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1c52707d7b2854790f4af31fa6d2c4b71f2cc1a2af88b2112c9c80c4b9c7afa8/kubepods/burstable/pod221493aef04c96d804ed40fb764a427a/3a868d3068d502f28aae24a2d3bf1b4f860531149edb504745b5c92841fc9d44/freezer.state
	I1123 10:25:23.785187 1638467 api_server.go:204] freezer state: "THAWED"
	I1123 10:25:23.785216 1638467 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 10:25:23.795157 1638467 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 10:25:23.795335 1638467 status.go:463] ha-711488 apiserver status = Running (err=<nil>)
	I1123 10:25:23.795354 1638467 status.go:176] ha-711488 status: &{Name:ha-711488 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:25:23.795373 1638467 status.go:174] checking status of ha-711488-m02 ...
	I1123 10:25:23.795695 1638467 cli_runner.go:164] Run: docker container inspect ha-711488-m02 --format={{.State.Status}}
	I1123 10:25:23.813369 1638467 status.go:371] ha-711488-m02 host status = "Stopped" (err=<nil>)
	I1123 10:25:23.813394 1638467 status.go:384] host is not running, skipping remaining checks
	I1123 10:25:23.813401 1638467 status.go:176] ha-711488-m02 status: &{Name:ha-711488-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:25:23.813420 1638467 status.go:174] checking status of ha-711488-m03 ...
	I1123 10:25:23.813745 1638467 cli_runner.go:164] Run: docker container inspect ha-711488-m03 --format={{.State.Status}}
	I1123 10:25:23.835662 1638467 status.go:371] ha-711488-m03 host status = "Running" (err=<nil>)
	I1123 10:25:23.835708 1638467 host.go:66] Checking if "ha-711488-m03" exists ...
	I1123 10:25:23.836177 1638467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-711488-m03
	I1123 10:25:23.857403 1638467 host.go:66] Checking if "ha-711488-m03" exists ...
	I1123 10:25:23.857720 1638467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:25:23.857768 1638467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-711488-m03
	I1123 10:25:23.878016 1638467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34997 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/ha-711488-m03/id_rsa Username:docker}
	I1123 10:25:23.989382 1638467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:25:24.008837 1638467 kubeconfig.go:125] found "ha-711488" server: "https://192.168.49.254:8443"
	I1123 10:25:24.008867 1638467 api_server.go:166] Checking apiserver status ...
	I1123 10:25:24.008937 1638467 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:25:24.021976 1638467 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1357/cgroup
	I1123 10:25:24.031711 1638467 api_server.go:182] apiserver freezer: "10:freezer:/docker/ef23fe7e7b90b4e63bef452c6afee8fc423e9310e2f438b23d8ffde8110a03aa/kubepods/burstable/pod7728d806c1368084d5fe4e4b12961dc2/806884dd9021ffa3ea9e5558114ac0056a61e810d2f371fd4ab29ef9044432b3"
	I1123 10:25:24.031783 1638467 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ef23fe7e7b90b4e63bef452c6afee8fc423e9310e2f438b23d8ffde8110a03aa/kubepods/burstable/pod7728d806c1368084d5fe4e4b12961dc2/806884dd9021ffa3ea9e5558114ac0056a61e810d2f371fd4ab29ef9044432b3/freezer.state
	I1123 10:25:24.040325 1638467 api_server.go:204] freezer state: "THAWED"
	I1123 10:25:24.040355 1638467 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 10:25:24.048532 1638467 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 10:25:24.048570 1638467 status.go:463] ha-711488-m03 apiserver status = Running (err=<nil>)
	I1123 10:25:24.048580 1638467 status.go:176] ha-711488-m03 status: &{Name:ha-711488-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:25:24.048597 1638467 status.go:174] checking status of ha-711488-m04 ...
	I1123 10:25:24.048916 1638467 cli_runner.go:164] Run: docker container inspect ha-711488-m04 --format={{.State.Status}}
	I1123 10:25:24.073674 1638467 status.go:371] ha-711488-m04 host status = "Running" (err=<nil>)
	I1123 10:25:24.073700 1638467 host.go:66] Checking if "ha-711488-m04" exists ...
	I1123 10:25:24.073992 1638467 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-711488-m04
	I1123 10:25:24.092086 1638467 host.go:66] Checking if "ha-711488-m04" exists ...
	I1123 10:25:24.092427 1638467 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:25:24.092490 1638467 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-711488-m04
	I1123 10:25:24.110189 1638467 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35002 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/ha-711488-m04/id_rsa Username:docker}
	I1123 10:25:24.216484 1638467 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:25:24.230401 1638467 status.go:176] ha-711488-m04 status: &{Name:ha-711488-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.87s)
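The stop/status pattern above can be reproduced by hand; a sketch (exit code 7 from status is what this run shows while a node is stopped):
out/minikube-linux-arm64 -p ha-711488 node stop m02 --alsologtostderr -v 5
# status exits non-zero (7 in this run) while any node is down
out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5
echo "status exit code: $?"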

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (13.6s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 node start m02 --alsologtostderr -v 5: (11.904470744s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5: (1.572883588s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.60s)
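Bringing the stopped control-plane node back and re-checking cluster membership follows the same commands the test drives:
out/minikube-linux-arm64 -p ha-711488 node start m02 --alsologtostderr -v 5
out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5
kubectl get nodes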

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.408474855s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.41s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.61s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 stop --alsologtostderr -v 5
E1123 10:25:41.658536 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 stop --alsologtostderr -v 5: (37.491744749s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 start --wait true --alsologtostderr -v 5
E1123 10:26:22.620158 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 start --wait true --alsologtostderr -v 5: (1m1.970626524s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.61s)
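A sketch of the full-cluster restart check above; the node list is captured before and after so the two can be compared (the diff step and file paths are illustrative):
out/minikube-linux-arm64 -p ha-711488 node list --alsologtostderr -v 5 > /tmp/nodes-before.txt
out/minikube-linux-arm64 -p ha-711488 stop --alsologtostderr -v 5
out/minikube-linux-arm64 -p ha-711488 start --wait true --alsologtostderr -v 5
out/minikube-linux-arm64 -p ha-711488 node list --alsologtostderr -v 5 > /tmp/nodes-after.txt
diff /tmp/nodes-before.txt /tmp/nodes-after.txt   # no output expected if every node survived the restart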

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (10.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 node delete m03 --alsologtostderr -v 5: (9.562578081s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.57s)
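Deleting a secondary control-plane node and confirming the remaining nodes report Ready uses the same go-template shown in the log:
out/minikube-linux-arm64 -p ha-711488 node delete m03 --alsologtostderr -v 5
kubectl get nodes
kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"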

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (36.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 stop --alsologtostderr -v 5
E1123 10:27:44.543390 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:27:54.960690 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 stop --alsologtostderr -v 5: (36.079905316s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5: exit status 7 (113.299277ms)

                                                
                                                
-- stdout --
	ha-711488
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-711488-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-711488-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:28:07.148480 1653311 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:28:07.148642 1653311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:28:07.148655 1653311 out.go:374] Setting ErrFile to fd 2...
	I1123 10:28:07.148660 1653311 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:28:07.148915 1653311 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:28:07.149093 1653311 out.go:368] Setting JSON to false
	I1123 10:28:07.149131 1653311 mustload.go:66] Loading cluster: ha-711488
	I1123 10:28:07.149237 1653311 notify.go:221] Checking for updates...
	I1123 10:28:07.149547 1653311 config.go:182] Loaded profile config "ha-711488": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:28:07.149565 1653311 status.go:174] checking status of ha-711488 ...
	I1123 10:28:07.150361 1653311 cli_runner.go:164] Run: docker container inspect ha-711488 --format={{.State.Status}}
	I1123 10:28:07.168739 1653311 status.go:371] ha-711488 host status = "Stopped" (err=<nil>)
	I1123 10:28:07.168763 1653311 status.go:384] host is not running, skipping remaining checks
	I1123 10:28:07.168770 1653311 status.go:176] ha-711488 status: &{Name:ha-711488 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:28:07.168798 1653311 status.go:174] checking status of ha-711488-m02 ...
	I1123 10:28:07.169102 1653311 cli_runner.go:164] Run: docker container inspect ha-711488-m02 --format={{.State.Status}}
	I1123 10:28:07.189051 1653311 status.go:371] ha-711488-m02 host status = "Stopped" (err=<nil>)
	I1123 10:28:07.189074 1653311 status.go:384] host is not running, skipping remaining checks
	I1123 10:28:07.189093 1653311 status.go:176] ha-711488-m02 status: &{Name:ha-711488-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:28:07.189114 1653311 status.go:174] checking status of ha-711488-m04 ...
	I1123 10:28:07.189412 1653311 cli_runner.go:164] Run: docker container inspect ha-711488-m04 --format={{.State.Status}}
	I1123 10:28:07.210585 1653311 status.go:371] ha-711488-m04 host status = "Stopped" (err=<nil>)
	I1123 10:28:07.210606 1653311 status.go:384] host is not running, skipping remaining checks
	I1123 10:28:07.210613 1653311 status.go:176] ha-711488-m04 status: &{Name:ha-711488-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.19s)
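Stopping the whole profile behaves like the single-node case, just across every remaining node; a sketch:
out/minikube-linux-arm64 -p ha-711488 stop --alsologtostderr -v 5
# with every node stopped, status again exits 7 and reports Stopped for each node
out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5 || echo "status exited $? (expected while the cluster is stopped)"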

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (59.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (58.15901862s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (59.14s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.81s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (84.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 node add --control-plane --alsologtostderr -v 5
E1123 10:30:00.682263 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:30:28.384783 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 node add --control-plane --alsologtostderr -v 5: (1m23.591995669s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5: (1.120062026s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.71s)
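Adding another control-plane node back into the running cluster is a single command, followed by the usual status check:
out/minikube-linux-arm64 -p ha-711488 node add --control-plane --alsologtostderr -v 5
out/minikube-linux-arm64 -p ha-711488 status --alsologtostderr -v 5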

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.077071426s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.08s)

                                                
                                    
x
+
TestJSONOutput/start/Command (81.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-286835 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-286835 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m21.411167095s)
--- PASS: TestJSONOutput/start/Command (81.41s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.7s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-286835 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.70s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.6s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-286835 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.60s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (5.99s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-286835 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-286835 --output=json --user=testUser: (5.989285913s)
--- PASS: TestJSONOutput/stop/Command (5.99s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-156219 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-156219 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (89.396625ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"138c9944-c967-4fa0-a38a-ac0777c5e59f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-156219] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0aaddf3f-551d-42cd-956b-34fe07eca5d3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21968"}}
	{"specversion":"1.0","id":"52ab0a50-27e5-4ad1-acd3-be3448c88bbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0d073e7c-d12b-45ec-aa6b-ad98c9fbc1fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig"}}
	{"specversion":"1.0","id":"a023c13c-29c5-4e28-bfd1-da21d899bd63","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube"}}
	{"specversion":"1.0","id":"f0b4c901-2012-4e61-a9e4-819996455d83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"88343eac-eea6-49a1-ae81-730b8250b00a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"14f0c015-e7c4-4bc1-89b4-f103047bf51a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-156219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-156219
--- PASS: TestErrorJSONOutput (0.24s)
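Each line of --output=json is a CloudEvents-style JSON object, so the error event above can be picked out with a line-oriented JSON filter; a sketch assuming jq is available (field names taken from the event shown in the log):
out/minikube-linux-arm64 start -p json-output-error-156219 --memory=3072 --output=json --wait=true --driver=fail \
  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
# expected output for this run: DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64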

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (38.91s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-021083 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-021083 --network=: (36.710345111s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-021083" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-021083
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-021083: (2.17742848s)
--- PASS: TestKicCustomNetwork/create_custom_network (38.91s)
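The custom-network variants hinge on the --network flag; a sketch of the empty-value case (minikube creates its own docker network) and the verification step from the log:
out/minikube-linux-arm64 start -p docker-network-021083 --network=
docker network ls --format {{.Name}}     # the newly created network should appear in this list
out/minikube-linux-arm64 delete -p docker-network-021083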

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (35.77s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-132925 --network=bridge
E1123 10:32:54.960224 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-132925 --network=bridge: (33.691908245s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-132925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-132925
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-132925: (2.048574432s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.77s)

                                                
                                    
x
+
TestKicExistingNetwork (35.89s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1123 10:33:29.756899 1584532 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 10:33:29.773484 1584532 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 10:33:29.773574 1584532 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 10:33:29.773598 1584532 cli_runner.go:164] Run: docker network inspect existing-network
W1123 10:33:29.791160 1584532 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 10:33:29.791259 1584532 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1123 10:33:29.791276 1584532 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1123 10:33:29.791408 1584532 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 10:33:29.816705 1584532 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-e44f782e1ead IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:7a:ae:ef:b1:2b:de} reservation:<nil>}
I1123 10:33:29.817005 1584532 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bc3f20}
I1123 10:33:29.817030 1584532 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 10:33:29.817081 1584532 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 10:33:29.876101 1584532 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-124461 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-124461 --network=existing-network: (33.631980933s)
helpers_test.go:175: Cleaning up "existing-network-124461" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-124461
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-124461: (2.110533646s)
I1123 10:34:05.634943 1584532 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.89s)
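To reuse a pre-existing docker network, create it first and pass its name; a sketch built from the commands in the log (the subnet is the one minikube picked in this run, and the final cleanup step is illustrative):
docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
out/minikube-linux-arm64 start -p existing-network-124461 --network=existing-network
out/minikube-linux-arm64 delete -p existing-network-124461
docker network rm existing-network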

                                                
                                    
x
+
TestKicCustomSubnet (33.82s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-716040 --subnet=192.168.60.0/24
E1123 10:34:18.067331 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-716040 --subnet=192.168.60.0/24: (31.640805912s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-716040 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-716040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-716040
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-716040: (2.147616277s)
--- PASS: TestKicCustomSubnet (33.82s)
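Pinning the subnet and confirming docker honoured it, using the inspect format from the log:
out/minikube-linux-arm64 start -p custom-subnet-716040 --subnet=192.168.60.0/24
docker network inspect custom-subnet-716040 --format "{{(index .IPAM.Config 0).Subnet}}"   # expect 192.168.60.0/24
out/minikube-linux-arm64 delete -p custom-subnet-716040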

                                                
                                    
x
+
TestKicStaticIP (34.38s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-433631 --static-ip=192.168.200.200
E1123 10:35:00.685725 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-433631 --static-ip=192.168.200.200: (32.052828583s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-433631 ip
helpers_test.go:175: Cleaning up "static-ip-433631" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-433631
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-433631: (2.171429382s)
--- PASS: TestKicStaticIP (34.38s)
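Assigning a static container IP and reading it back:
out/minikube-linux-arm64 start -p static-ip-433631 --static-ip=192.168.200.200
out/minikube-linux-arm64 -p static-ip-433631 ip    # should print 192.168.200.200
out/minikube-linux-arm64 delete -p static-ip-433631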

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (71.08s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-511084 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-511084 --driver=docker  --container-runtime=containerd: (31.623422124s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-513570 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-513570 --driver=docker  --container-runtime=containerd: (33.546209954s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-511084
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-513570
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-513570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-513570
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-513570: (2.043555495s)
helpers_test.go:175: Cleaning up "first-511084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-511084
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-511084: (2.435510509s)
--- PASS: TestMinikubeProfile (71.08s)
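Switching the active profile between two clusters, as exercised above:
out/minikube-linux-arm64 start -p first-511084 --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 start -p second-513570 --driver=docker --container-runtime=containerd
out/minikube-linux-arm64 profile first-511084     # make first-511084 the active profile
out/minikube-linux-arm64 profile list -ojson      # inspect both profiles as JSON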

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.78s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-353759 --memory=3072 --mount-string /tmp/TestMountStartserial1584764754/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-353759 --memory=3072 --mount-string /tmp/TestMountStartserial1584764754/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (4.777916483s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.78s)
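The mount-at-start tests pass a host:guest pair via --mount-string; a sketch with the flags used above (the host path here is illustrative):
out/minikube-linux-arm64 start -p mount-start-1-353759 --memory=3072 \
  --mount-string /tmp/mount-demo:/minikube-host --mount-gid 0 --mount-msize 6543 \
  --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker --container-runtime=containerd
# verify the host directory is visible inside the node
out/minikube-linux-arm64 -p mount-start-1-353759 ssh -- ls /minikube-host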

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-353759 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.19s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-355580 --memory=3072 --mount-string /tmp/TestMountStartserial1584764754/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-355580 --memory=3072 --mount-string /tmp/TestMountStartserial1584764754/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.192597009s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.19s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-355580 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-353759 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-353759 --alsologtostderr -v=5: (1.706234823s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-355580 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

                                                
                                    
TestMountStart/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-355580
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-355580: (1.288011171s)
--- PASS: TestMountStart/serial/Stop (1.29s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.44s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-355580
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-355580: (6.439953141s)
--- PASS: TestMountStart/serial/RestartStopped (7.44s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-355580 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)
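
Taken together, the MountStart steps above amount to the following manual sequence (a sketch based on the logged commands; the host path and profile name are placeholders):

  # start a node with no Kubernetes, mounting a host directory at /minikube-host
  minikube start -p mount-demo --memory=3072 --no-kubernetes \
    --mount-string /tmp/hostdir:/minikube-host --mount-port 46464 \
    --driver=docker --container-runtime=containerd
  # verify the mount from inside the node
  minikube -p mount-demo ssh -- ls /minikube-host
  # the mount should still be visible after a stop/start cycle
  minikube stop -p mount-demo
  minikube start -p mount-demo
  minikube -p mount-demo ssh -- ls /minikube-host
  minikube delete -p mount-demo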

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (81.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-048185 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1123 10:37:54.960408 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-048185 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (1m21.119457418s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (81.65s)
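
The two-node bring-up above boils down to the following (sketch; the profile name is a placeholder):

  # start a two-node cluster and wait for all components to come up
  minikube start -p multinode-demo --nodes=2 --memory=3072 --wait=true \
    --driver=docker --container-runtime=containerd
  # confirm both the control plane and the worker report Running
  minikube -p multinode-demo status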

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-048185 -- rollout status deployment/busybox: (4.611609695s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- exec busybox-7b57f96db7-9778s -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- exec busybox-7b57f96db7-xh7bx -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- exec busybox-7b57f96db7-9778s -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- exec busybox-7b57f96db7-xh7bx -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- exec busybox-7b57f96db7-9778s -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- exec busybox-7b57f96db7-xh7bx -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.44s)
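
The DNS checks above can be repeated against any multi-node profile along these lines (sketch; the test uses the testdata/multinodes/multinode-pod-dns-test.yaml manifest, and pod names are whatever `get pods` returns in your cluster):

  # deploy the busybox test workload and wait for it to roll out
  minikube kubectl -p multinode-demo -- apply -f multinode-pod-dns-test.yaml
  minikube kubectl -p multinode-demo -- rollout status deployment/busybox
  # resolve in-cluster names from each replica (one per node)
  for pod in $(minikube kubectl -p multinode-demo -- get pods -o jsonpath='{.items[*].metadata.name}'); do
    minikube kubectl -p multinode-demo -- exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
  done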

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- exec busybox-7b57f96db7-9778s -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- exec busybox-7b57f96db7-9778s -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- exec busybox-7b57f96db7-xh7bx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048185 -- exec busybox-7b57f96db7-xh7bx -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.94s)
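
Host reachability is checked the same way in each pod: resolve host.minikube.internal and ping the address it maps to (sketch; the pod name is a placeholder):

  HOST_IP=$(minikube kubectl -p multinode-demo -- exec busybox-pod -- \
    sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
  minikube kubectl -p multinode-demo -- exec busybox-pod -- sh -c "ping -c 1 $HOST_IP"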

                                                
                                    
TestMultiNode/serial/AddNode (27.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-048185 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-048185 -v=5 --alsologtostderr: (27.115673561s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.81s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-048185 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (10.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp testdata/cp-test.txt multinode-048185:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp multinode-048185:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2877069304/001/cp-test_multinode-048185.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp multinode-048185:/home/docker/cp-test.txt multinode-048185-m02:/home/docker/cp-test_multinode-048185_multinode-048185-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m02 "sudo cat /home/docker/cp-test_multinode-048185_multinode-048185-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp multinode-048185:/home/docker/cp-test.txt multinode-048185-m03:/home/docker/cp-test_multinode-048185_multinode-048185-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m03 "sudo cat /home/docker/cp-test_multinode-048185_multinode-048185-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp testdata/cp-test.txt multinode-048185-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp multinode-048185-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2877069304/001/cp-test_multinode-048185-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp multinode-048185-m02:/home/docker/cp-test.txt multinode-048185:/home/docker/cp-test_multinode-048185-m02_multinode-048185.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185 "sudo cat /home/docker/cp-test_multinode-048185-m02_multinode-048185.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp multinode-048185-m02:/home/docker/cp-test.txt multinode-048185-m03:/home/docker/cp-test_multinode-048185-m02_multinode-048185-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m03 "sudo cat /home/docker/cp-test_multinode-048185-m02_multinode-048185-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp testdata/cp-test.txt multinode-048185-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp multinode-048185-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2877069304/001/cp-test_multinode-048185-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp multinode-048185-m03:/home/docker/cp-test.txt multinode-048185:/home/docker/cp-test_multinode-048185-m03_multinode-048185.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185 "sudo cat /home/docker/cp-test_multinode-048185-m03_multinode-048185.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 cp multinode-048185-m03:/home/docker/cp-test.txt multinode-048185-m02:/home/docker/cp-test_multinode-048185-m03_multinode-048185-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 ssh -n multinode-048185-m02 "sudo cat /home/docker/cp-test_multinode-048185-m03_multinode-048185-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.50s)
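
Each `cp` in the matrix above is verified the same way: copy, then cat the destination over ssh and compare. A minimal round trip looks like this (sketch; node names and paths are placeholders):

  # host -> node
  minikube -p multinode-demo cp cp-test.txt multinode-demo:/home/docker/cp-test.txt
  minikube -p multinode-demo ssh -n multinode-demo "sudo cat /home/docker/cp-test.txt"
  # node -> node
  minikube -p multinode-demo cp multinode-demo:/home/docker/cp-test.txt \
    multinode-demo-m02:/home/docker/cp-test.txt
  minikube -p multinode-demo ssh -n multinode-demo-m02 "sudo cat /home/docker/cp-test.txt"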

                                                
                                    
TestMultiNode/serial/StopNode (2.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-048185 node stop m03: (1.323691027s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-048185 status: exit status 7 (571.046362ms)

                                                
                                                
-- stdout --
	multinode-048185
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048185-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-048185-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-048185 status --alsologtostderr: exit status 7 (535.98308ms)

                                                
                                                
-- stdout --
	multinode-048185
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048185-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-048185-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:39:02.432135 1706574 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:39:02.432358 1706574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:39:02.432371 1706574 out.go:374] Setting ErrFile to fd 2...
	I1123 10:39:02.432376 1706574 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:39:02.432731 1706574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:39:02.433043 1706574 out.go:368] Setting JSON to false
	I1123 10:39:02.433110 1706574 mustload.go:66] Loading cluster: multinode-048185
	I1123 10:39:02.433180 1706574 notify.go:221] Checking for updates...
	I1123 10:39:02.433631 1706574 config.go:182] Loaded profile config "multinode-048185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:39:02.433663 1706574 status.go:174] checking status of multinode-048185 ...
	I1123 10:39:02.434337 1706574 cli_runner.go:164] Run: docker container inspect multinode-048185 --format={{.State.Status}}
	I1123 10:39:02.455485 1706574 status.go:371] multinode-048185 host status = "Running" (err=<nil>)
	I1123 10:39:02.455509 1706574 host.go:66] Checking if "multinode-048185" exists ...
	I1123 10:39:02.455825 1706574 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-048185
	I1123 10:39:02.483246 1706574 host.go:66] Checking if "multinode-048185" exists ...
	I1123 10:39:02.483561 1706574 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:39:02.483619 1706574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-048185
	I1123 10:39:02.508823 1706574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35107 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/multinode-048185/id_rsa Username:docker}
	I1123 10:39:02.612810 1706574 ssh_runner.go:195] Run: systemctl --version
	I1123 10:39:02.619757 1706574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:39:02.632662 1706574 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:39:02.690914 1706574 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 10:39:02.680894265 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:39:02.691622 1706574 kubeconfig.go:125] found "multinode-048185" server: "https://192.168.67.2:8443"
	I1123 10:39:02.691715 1706574 api_server.go:166] Checking apiserver status ...
	I1123 10:39:02.691777 1706574 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 10:39:02.703935 1706574 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1439/cgroup
	I1123 10:39:02.712046 1706574 api_server.go:182] apiserver freezer: "10:freezer:/docker/6a312620c7d0ef5046ef9c2f657005e48b262b2cb4ecd8ab4bcdf9ff2918d696/kubepods/burstable/pod632eed8bc6537628c18a521f0af01540/04dd12368148842711a4c67b098ec5e6b9292225799bff65084ce4b6c6bc80f8"
	I1123 10:39:02.712121 1706574 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6a312620c7d0ef5046ef9c2f657005e48b262b2cb4ecd8ab4bcdf9ff2918d696/kubepods/burstable/pod632eed8bc6537628c18a521f0af01540/04dd12368148842711a4c67b098ec5e6b9292225799bff65084ce4b6c6bc80f8/freezer.state
	I1123 10:39:02.719556 1706574 api_server.go:204] freezer state: "THAWED"
	I1123 10:39:02.719583 1706574 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 10:39:02.729180 1706574 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 10:39:02.729210 1706574 status.go:463] multinode-048185 apiserver status = Running (err=<nil>)
	I1123 10:39:02.729221 1706574 status.go:176] multinode-048185 status: &{Name:multinode-048185 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:39:02.729237 1706574 status.go:174] checking status of multinode-048185-m02 ...
	I1123 10:39:02.729553 1706574 cli_runner.go:164] Run: docker container inspect multinode-048185-m02 --format={{.State.Status}}
	I1123 10:39:02.745393 1706574 status.go:371] multinode-048185-m02 host status = "Running" (err=<nil>)
	I1123 10:39:02.745415 1706574 host.go:66] Checking if "multinode-048185-m02" exists ...
	I1123 10:39:02.745705 1706574 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-048185-m02
	I1123 10:39:02.762101 1706574 host.go:66] Checking if "multinode-048185-m02" exists ...
	I1123 10:39:02.762409 1706574 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 10:39:02.762446 1706574 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-048185-m02
	I1123 10:39:02.779421 1706574 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:35112 SSHKeyPath:/home/jenkins/minikube-integration/21968-1582671/.minikube/machines/multinode-048185-m02/id_rsa Username:docker}
	I1123 10:39:02.880617 1706574 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 10:39:02.893040 1706574 status.go:176] multinode-048185-m02 status: &{Name:multinode-048185-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:39:02.893077 1706574 status.go:174] checking status of multinode-048185-m03 ...
	I1123 10:39:02.893452 1706574 cli_runner.go:164] Run: docker container inspect multinode-048185-m03 --format={{.State.Status}}
	I1123 10:39:02.911165 1706574 status.go:371] multinode-048185-m03 host status = "Stopped" (err=<nil>)
	I1123 10:39:02.911211 1706574 status.go:384] host is not running, skipping remaining checks
	I1123 10:39:02.911219 1706574 status.go:176] multinode-048185-m03 status: &{Name:multinode-048185-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.43s)
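
Note that once any node is down, `minikube status` deliberately exits non-zero (exit status 7 above), so scripts that poll it should check the exit code rather than assume success. A sketch, with the same placeholder profile as before:

  minikube -p multinode-demo node stop m03
  if ! minikube -p multinode-demo status; then
    echo "at least one node is not Running"
  fi
  # bring the worker back
  minikube -p multinode-demo node start m03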

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-048185 node start m03 -v=5 --alsologtostderr: (6.846437497s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.64s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (72.74s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-048185
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-048185
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-048185: (25.095821568s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-048185 --wait=true -v=5 --alsologtostderr
E1123 10:40:00.683088 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-048185 --wait=true -v=5 --alsologtostderr: (47.525465115s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-048185
--- PASS: TestMultiNode/serial/RestartKeepsNodes (72.74s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-048185 node delete m03: (4.9788389s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.67s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-048185 stop: (23.934069984s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-048185 status: exit status 7 (89.148213ms)

                                                
                                                
-- stdout --
	multinode-048185
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-048185-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-048185 status --alsologtostderr: exit status 7 (93.08425ms)

                                                
                                                
-- stdout --
	multinode-048185
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-048185-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:40:53.044231 1715314 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:40:53.044394 1715314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:40:53.044408 1715314 out.go:374] Setting ErrFile to fd 2...
	I1123 10:40:53.044421 1715314 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:40:53.044706 1715314 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:40:53.044930 1715314 out.go:368] Setting JSON to false
	I1123 10:40:53.044980 1715314 mustload.go:66] Loading cluster: multinode-048185
	I1123 10:40:53.045067 1715314 notify.go:221] Checking for updates...
	I1123 10:40:53.045457 1715314 config.go:182] Loaded profile config "multinode-048185": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:40:53.045476 1715314 status.go:174] checking status of multinode-048185 ...
	I1123 10:40:53.046054 1715314 cli_runner.go:164] Run: docker container inspect multinode-048185 --format={{.State.Status}}
	I1123 10:40:53.065674 1715314 status.go:371] multinode-048185 host status = "Stopped" (err=<nil>)
	I1123 10:40:53.065697 1715314 status.go:384] host is not running, skipping remaining checks
	I1123 10:40:53.065704 1715314 status.go:176] multinode-048185 status: &{Name:multinode-048185 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 10:40:53.065735 1715314 status.go:174] checking status of multinode-048185-m02 ...
	I1123 10:40:53.066087 1715314 cli_runner.go:164] Run: docker container inspect multinode-048185-m02 --format={{.State.Status}}
	I1123 10:40:53.085221 1715314 status.go:371] multinode-048185-m02 host status = "Stopped" (err=<nil>)
	I1123 10:40:53.085258 1715314 status.go:384] host is not running, skipping remaining checks
	I1123 10:40:53.085275 1715314 status.go:176] multinode-048185-m02 status: &{Name:multinode-048185-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.12s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.25s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-048185 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1123 10:41:23.747715 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-048185 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (47.542630468s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048185 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.25s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-048185
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-048185-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-048185-m02 --driver=docker  --container-runtime=containerd: exit status 14 (94.680656ms)

                                                
                                                
-- stdout --
	* [multinode-048185-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-048185-m02' is duplicated with machine name 'multinode-048185-m02' in profile 'multinode-048185'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-048185-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-048185-m03 --driver=docker  --container-runtime=containerd: (36.937727003s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-048185
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-048185: exit status 80 (353.913711ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-048185 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-048185-m03 already exists in multinode-048185-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-048185-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-048185-m03: (2.049963477s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.49s)

                                                
                                    
TestPreload (122.13s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-573050 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E1123 10:42:54.960372 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-573050 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (57.020645594s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-573050 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-573050 image pull gcr.io/k8s-minikube/busybox: (2.266378585s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-573050
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-573050: (5.911257064s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-573050 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-573050 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (54.230805431s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-573050 image list
helpers_test.go:175: Cleaning up "test-preload-573050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-573050
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-573050: (2.446035013s)
--- PASS: TestPreload (122.13s)
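
The preload scenario above is: build a cluster with preloaded images disabled, pull an extra image, stop, and confirm the image is still present after a restart. Condensed into a sketch (the profile name is a placeholder):

  minikube start -p preload-demo --memory=3072 --preload=false --wait=true \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.32.0
  minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
  minikube stop -p preload-demo
  minikube start -p preload-demo --memory=3072 --wait=true \
    --driver=docker --container-runtime=containerd
  # the busybox image pulled before the stop should still be listed
  minikube -p preload-demo image list
  minikube delete -p preload-demo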

                                                
                                    
TestScheduledStopUnix (107.94s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-247708 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-247708 --memory=3072 --driver=docker  --container-runtime=containerd: (32.178749039s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-247708 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 10:44:59.355344 1731099 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:44:59.355472 1731099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:44:59.355483 1731099 out.go:374] Setting ErrFile to fd 2...
	I1123 10:44:59.355495 1731099 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:44:59.355846 1731099 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:44:59.356142 1731099 out.go:368] Setting JSON to false
	I1123 10:44:59.356261 1731099 mustload.go:66] Loading cluster: scheduled-stop-247708
	I1123 10:44:59.356946 1731099 config.go:182] Loaded profile config "scheduled-stop-247708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:44:59.357039 1731099 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/config.json ...
	I1123 10:44:59.357237 1731099 mustload.go:66] Loading cluster: scheduled-stop-247708
	I1123 10:44:59.357440 1731099 config.go:182] Loaded profile config "scheduled-stop-247708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-247708 -n scheduled-stop-247708
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-247708 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 10:44:59.853974 1731189 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:44:59.854097 1731189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:44:59.854106 1731189 out.go:374] Setting ErrFile to fd 2...
	I1123 10:44:59.854111 1731189 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:44:59.854370 1731189 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:44:59.854609 1731189 out.go:368] Setting JSON to false
	I1123 10:44:59.854799 1731189 daemonize_unix.go:73] killing process 1731116 as it is an old scheduled stop
	I1123 10:44:59.854963 1731189 mustload.go:66] Loading cluster: scheduled-stop-247708
	I1123 10:44:59.855388 1731189 config.go:182] Loaded profile config "scheduled-stop-247708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:44:59.855459 1731189 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/config.json ...
	I1123 10:44:59.855633 1731189 mustload.go:66] Loading cluster: scheduled-stop-247708
	I1123 10:44:59.855745 1731189 config.go:182] Loaded profile config "scheduled-stop-247708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 10:44:59.863454 1584532 retry.go:31] will retry after 97.479µs: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.864626 1584532 retry.go:31] will retry after 106.2µs: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.868540 1584532 retry.go:31] will retry after 263.666µs: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.869668 1584532 retry.go:31] will retry after 460.069µs: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.870739 1584532 retry.go:31] will retry after 529.322µs: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.871846 1584532 retry.go:31] will retry after 778.093µs: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.872962 1584532 retry.go:31] will retry after 1.318174ms: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.875147 1584532 retry.go:31] will retry after 1.896603ms: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.877292 1584532 retry.go:31] will retry after 1.642075ms: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.879442 1584532 retry.go:31] will retry after 3.099877ms: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.883637 1584532 retry.go:31] will retry after 4.71224ms: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.888861 1584532 retry.go:31] will retry after 12.506017ms: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.902090 1584532 retry.go:31] will retry after 7.743075ms: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.910320 1584532 retry.go:31] will retry after 27.042727ms: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.937480 1584532 retry.go:31] will retry after 17.767422ms: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
I1123 10:44:59.955710 1584532 retry.go:31] will retry after 29.903843ms: open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-247708 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
E1123 10:45:00.692862 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-247708 -n scheduled-stop-247708
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-247708
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-247708 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 10:45:26.061578 1731862 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:45:26.061770 1731862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:45:26.061802 1731862 out.go:374] Setting ErrFile to fd 2...
	I1123 10:45:26.061823 1731862 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:45:26.062120 1731862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:45:26.062429 1731862 out.go:368] Setting JSON to false
	I1123 10:45:26.062573 1731862 mustload.go:66] Loading cluster: scheduled-stop-247708
	I1123 10:45:26.062988 1731862 config.go:182] Loaded profile config "scheduled-stop-247708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:45:26.063109 1731862 profile.go:143] Saving config to /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/scheduled-stop-247708/config.json ...
	I1123 10:45:26.063379 1731862 mustload.go:66] Loading cluster: scheduled-stop-247708
	I1123 10:45:26.063591 1731862 config.go:182] Loaded profile config "scheduled-stop-247708": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-247708
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-247708: exit status 7 (68.223303ms)

                                                
                                                
-- stdout --
	scheduled-stop-247708
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-247708 -n scheduled-stop-247708
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-247708 -n scheduled-stop-247708: exit status 7 (70.541743ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-247708" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-247708
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-247708: (3.853546151s)
--- PASS: TestScheduledStopUnix (107.94s)
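
The scheduled-stop flow exercised above, condensed (sketch; the profile name is a placeholder):

  # schedule a stop 5 minutes out; the pending schedule shows up in status
  minikube stop -p sched-demo --schedule 5m
  minikube status --format={{.TimeToStop}} -p sched-demo
  # re-scheduling replaces the previous schedule, and it can also be cancelled outright
  minikube stop -p sched-demo --schedule 15s
  minikube stop -p sched-demo --cancel-scheduled
  # once a 15s schedule is allowed to fire, status reports the host as Stopped
  minikube status --format={{.Host}} -p sched-demo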

                                                
                                    
TestInsufficientStorage (13.17s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-899183 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-899183 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.553190755s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"e7765dc0-440a-4a0a-8660-441e57dde06e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-899183] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2d9afa2-654b-4737-b0ae-c82d22fff83e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21968"}}
	{"specversion":"1.0","id":"bd8aef8b-ead2-462e-9cdd-dc2c3baf349f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0cb75a18-40f3-4313-847c-46cd0844dfca","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig"}}
	{"specversion":"1.0","id":"b41ca043-31a1-4014-94bf-126804346ba3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube"}}
	{"specversion":"1.0","id":"2e561fdf-1fd3-4c76-8348-494a54e9eb45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"9263c8ba-c59e-459d-a298-68d5a92892e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"593b5bee-0245-4a35-a202-0b54540e04bd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"928a1b4c-5c42-442e-a787-40fc8a248d15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"93c6be24-edb8-4d28-873f-4ceb66a0b7b7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a7c353cb-ab8c-48bf-99f3-67852a082099","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"895786ba-2011-48aa-af7d-a60f47fa7b9f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-899183\" primary control-plane node in \"insufficient-storage-899183\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce50996e-e2ec-4101-ba24-9de7f69725c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"996e3229-0990-49a0-8b14-da48e859183d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"e2d107f4-3d3a-4baf-badb-5a85d0539b00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-899183 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-899183 --output=json --layout=cluster: exit status 7 (295.197108ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-899183","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-899183","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 10:46:25.893579 1733686 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-899183" does not appear in /home/jenkins/minikube-integration/21968-1582671/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-899183 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-899183 --output=json --layout=cluster: exit status 7 (304.193824ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-899183","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-899183","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 10:46:26.200066 1733754 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-899183" does not appear in /home/jenkins/minikube-integration/21968-1582671/kubeconfig
	E1123 10:46:26.209729 1733754 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/insufficient-storage-899183/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-899183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-899183
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-899183: (2.016734434s)
--- PASS: TestInsufficientStorage (13.17s)
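For reference, the remediation listed in the RSRC_DOCKER_STORAGE advice above reduces to a couple of commands; this is only a sketch of items 1 and 3 of that advice (item 2 is a Docker Desktop GUI setting), and the "-a" flag is an optional choice rather than something this run used:
	# reclaim unused images, containers and build cache on the host Docker daemon
	docker system prune -a
	# reclaim space inside the minikube node (only relevant when the node uses the Docker container runtime)
	minikube ssh -- docker system prune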

                                                
                                    
x
+
TestRunningBinaryUpgrade (62.93s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3222292075 start -p running-upgrade-514707 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3222292075 start -p running-upgrade-514707 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.891319054s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-514707 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1123 10:50:58.068793 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-514707 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (27.17181341s)
helpers_test.go:175: Cleaning up "running-upgrade-514707" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-514707
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-514707: (2.013799569s)
--- PASS: TestRunningBinaryUpgrade (62.93s)

                                                
                                    
x
+
TestKubernetesUpgrade (348.14s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (34.074311071s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-871841
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-871841: (1.314245158s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-871841 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-871841 status --format={{.Host}}: exit status 7 (97.340319ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m53.667642051s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-871841 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (103.201912ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-871841] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-871841
	    minikube start -p kubernetes-upgrade-871841 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8718412 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-871841 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-871841 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (15.972219025s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-871841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-871841
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-871841: (2.808014923s)
--- PASS: TestKubernetesUpgrade (348.14s)
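Outside the test harness, the flow exercised above corresponds roughly to the following sequence (a sketch; the profile name is illustrative, the versions and flags are the ones used in this run):
	# create a cluster on the older Kubernetes release
	minikube start -p kubernetes-upgrade --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	# stop it, then start again on the newer release to upgrade in place
	minikube stop -p kubernetes-upgrade
	minikube start -p kubernetes-upgrade --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=containerd
	# attempting to go back to v1.28.0 is rejected with K8S_DOWNGRADE_UNSUPPORTED, as shown above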

                                                
                                    
x
+
TestMissingContainerUpgrade (160.61s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.2609851173 start -p missing-upgrade-276598 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.2609851173 start -p missing-upgrade-276598 --memory=3072 --driver=docker  --container-runtime=containerd: (58.773807731s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-276598
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-276598
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-276598 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-276598 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m38.399623932s)
helpers_test.go:175: Cleaning up "missing-upgrade-276598" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-276598
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-276598: (1.975982534s)
--- PASS: TestMissingContainerUpgrade (160.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-469143 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-469143 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (101.540925ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-469143] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (46.82s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-469143 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-469143 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (46.349463335s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-469143 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (46.82s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (24.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-469143 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-469143 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.437441078s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-469143 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-469143 status -o json: exit status 2 (308.213393ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-469143","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-469143
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-469143: (2.006279109s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.75s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (7.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-469143 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-469143 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.710265607s)
--- PASS: TestNoKubernetes/serial/Start (7.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21968-1582671/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-469143 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-469143 "sudo systemctl is-active --quiet service kubelet": exit status 1 (369.042491ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.86s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (2.72s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-469143
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-469143: (2.718485665s)
--- PASS: TestNoKubernetes/serial/Stop (2.72s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.41s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-469143 --driver=docker  --container-runtime=containerd
E1123 10:47:54.960583 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-469143 --driver=docker  --container-runtime=containerd: (7.406265536s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.41s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-469143 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-469143 "sudo systemctl is-active --quiet service kubelet": exit status 1 (350.511547ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.35s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.79s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (53.95s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3818434373 start -p stopped-upgrade-520182 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3818434373 start -p stopped-upgrade-520182 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.359588556s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3818434373 -p stopped-upgrade-520182 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3818434373 -p stopped-upgrade-520182 stop: (1.315174384s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-520182 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1123 10:50:00.682663 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-520182 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (20.275771227s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (53.95s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-520182
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-520182: (1.390479695s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

                                                
                                    
x
+
TestPause/serial/Start (50.69s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-886155 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-886155 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (50.687230849s)
--- PASS: TestPause/serial/Start (50.69s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.46s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-886155 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-886155 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.433328496s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.46s)

                                                
                                    
x
+
TestPause/serial/Pause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-886155 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.69s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-886155 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-886155 --output=json --layout=cluster: exit status 2 (324.477477ms)

                                                
                                                
-- stdout --
	{"Name":"pause-886155","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-886155","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)

                                                
                                    
x
+
TestPause/serial/Unpause (0.62s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-886155 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.62s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.83s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-886155 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.83s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.76s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-886155 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-886155 --alsologtostderr -v=5: (2.760934388s)
--- PASS: TestPause/serial/DeletePaused (2.76s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (15.39s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (15.333703881s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-886155
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-886155: exit status 1 (18.535454ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-886155: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (4.62s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-378762 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-378762 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (190.73804ms)

                                                
                                                
-- stdout --
	* [false-378762] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21968
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 10:53:10.259156 1772989 out.go:360] Setting OutFile to fd 1 ...
	I1123 10:53:10.259381 1772989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:53:10.259407 1772989 out.go:374] Setting ErrFile to fd 2...
	I1123 10:53:10.259426 1772989 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 10:53:10.259723 1772989 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21968-1582671/.minikube/bin
	I1123 10:53:10.260164 1772989 out.go:368] Setting JSON to false
	I1123 10:53:10.261115 1772989 start.go:133] hostinfo: {"hostname":"ip-172-31-21-244","uptime":41736,"bootTime":1763853455,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1123 10:53:10.261209 1772989 start.go:143] virtualization:  
	I1123 10:53:10.264961 1772989 out.go:179] * [false-378762] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 10:53:10.268029 1772989 out.go:179]   - MINIKUBE_LOCATION=21968
	I1123 10:53:10.268111 1772989 notify.go:221] Checking for updates...
	I1123 10:53:10.272351 1772989 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 10:53:10.275620 1772989 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21968-1582671/kubeconfig
	I1123 10:53:10.278510 1772989 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21968-1582671/.minikube
	I1123 10:53:10.282075 1772989 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 10:53:10.285353 1772989 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 10:53:10.289841 1772989 config.go:182] Loaded profile config "kubernetes-upgrade-871841": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 10:53:10.289956 1772989 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 10:53:10.312351 1772989 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 10:53:10.312493 1772989 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 10:53:10.378407 1772989 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 10:53:10.368947145 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 10:53:10.378517 1772989 docker.go:319] overlay module found
	I1123 10:53:10.382046 1772989 out.go:179] * Using the docker driver based on user configuration
	I1123 10:53:10.385850 1772989 start.go:309] selected driver: docker
	I1123 10:53:10.385878 1772989 start.go:927] validating driver "docker" against <nil>
	I1123 10:53:10.385892 1772989 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 10:53:10.389672 1772989 out.go:203] 
	W1123 10:53:10.392855 1772989 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1123 10:53:10.396556 1772989 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-378762 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-378762" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:48:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-871841
contexts:
- context:
    cluster: kubernetes-upgrade-871841
    user: kubernetes-upgrade-871841
  name: kubernetes-upgrade-871841
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-871841
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/kubernetes-upgrade-871841/client.crt
    client-key: /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/kubernetes-upgrade-871841/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-378762

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-378762"

                                                
                                                
----------------------- debugLogs end: false-378762 [took: 4.234683373s] --------------------------------
helpers_test.go:175: Cleaning up "false-378762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-378762
--- PASS: TestNetworkPlugins/group/false (4.62s)
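The non-zero exit above is the expected guard: with --container-runtime=containerd, minikube rejects --cni=false because containerd requires a CNI plugin. A minimal sketch of a start line that clears this validation, using the same profile and flags as the run above while simply omitting --cni so minikube falls back to its automatic CNI selection (that fallback behavior is an assumption, not shown in this run):
	minikube start -p false-378762 --memory=3072 --driver=docker --container-runtime=containerd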

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (58.83s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1123 10:55:00.682619 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (58.827604569s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (58.83s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-162750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-162750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.025093247s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-162750 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/old-k8s-version/serial/Stop (12.15s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-162750 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-162750 --alsologtostderr -v=3: (12.146008246s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162750 -n old-k8s-version-162750
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162750 -n old-k8s-version-162750: exit status 7 (72.664691ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-162750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/old-k8s-version/serial/SecondStart (55.48s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-162750 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (55.079995721s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-162750 -n old-k8s-version-162750
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (55.48s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mw6zb" [51d5cb29-97bd-4f9e-b06c-4ebef77f7ea2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003146148s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-mw6zb" [51d5cb29-97bd-4f9e-b06c-4ebef77f7ea2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004767087s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-162750 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-162750 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/old-k8s-version/serial/Pause (3.19s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-162750 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162750 -n old-k8s-version-162750
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162750 -n old-k8s-version-162750: exit status 2 (337.296589ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-162750 -n old-k8s-version-162750
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-162750 -n old-k8s-version-162750: exit status 2 (356.55111ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-162750 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-162750 -n old-k8s-version-162750
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-162750 -n old-k8s-version-162750
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.19s)

TestStartStop/group/no-preload/serial/FirstStart (71.26s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m11.255858593s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (71.26s)

TestStartStop/group/embed-certs/serial/FirstStart (86.87s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-969029 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1123 10:57:54.960577 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 10:58:03.750064 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-969029 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m26.872756026s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.87s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-055571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-055571 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.026038697s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-055571 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/no-preload/serial/Stop (12.13s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-055571 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-055571 --alsologtostderr -v=3: (12.125727216s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.13s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-055571 -n no-preload-055571
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-055571 -n no-preload-055571: exit status 7 (72.790864ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-055571 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (51.43s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-055571 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (50.9940288s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-055571 -n no-preload-055571
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (51.43s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.58s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-969029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-969029 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.377308779s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-969029 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.58s)

TestStartStop/group/embed-certs/serial/Stop (12.39s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-969029 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-969029 --alsologtostderr -v=3: (12.394876001s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.39s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-969029 -n embed-certs-969029
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-969029 -n embed-certs-969029: exit status 7 (73.721613ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-969029 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/embed-certs/serial/SecondStart (53.84s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-969029 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-969029 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.361415424s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-969029 -n embed-certs-969029
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (53.84s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9mkmh" [b08dac5e-b921-4c6d-8b6c-0bc3284eb479] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.016279381s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.14s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-9mkmh" [b08dac5e-b921-4c6d-8b6c-0bc3284eb479] Running
E1123 11:00:00.682668 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005901525s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-055571 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.14s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-055571 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.08s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-055571 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-055571 -n no-preload-055571
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-055571 -n no-preload-055571: exit status 2 (342.911018ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-055571 -n no-preload-055571
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-055571 -n no-preload-055571: exit status 2 (331.432196ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-055571 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-055571 -n no-preload-055571
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-055571 -n no-preload-055571
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.08s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.4s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-071466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-071466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m23.400037423s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (83.40s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h294h" [ca672814-739d-44ef-8d2c-7ef08642a1a5] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003599853s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.17s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-h294h" [ca672814-739d-44ef-8d2c-7ef08642a1a5] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004251376s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-969029 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.17s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-969029 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/embed-certs/serial/Pause (4.12s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-969029 --alsologtostderr -v=1
E1123 11:00:44.748564 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:00:44.755055 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:00:44.766377 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:00:44.787791 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-969029 --alsologtostderr -v=1: (1.037512384s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-969029 -n embed-certs-969029
E1123 11:00:44.829670 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:00:44.911435 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:00:45.074163 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-969029 -n embed-certs-969029: exit status 2 (487.79065ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-969029 -n embed-certs-969029
E1123 11:00:45.396039 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-969029 -n embed-certs-969029: exit status 2 (381.425873ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-969029 --alsologtostderr -v=1
E1123 11:00:46.038287 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-969029 -n embed-certs-969029
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-969029 -n embed-certs-969029
E1123 11:00:47.323082 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.12s)

TestStartStop/group/newest-cni/serial/FirstStart (39.63s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-268828 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1123 11:00:55.007501 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:01:05.249749 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:01:25.731694 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-268828 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (39.629880299s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (39.63s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-268828 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.95s)

TestStartStop/group/newest-cni/serial/Stop (1.34s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-268828 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-268828 --alsologtostderr -v=3: (1.341057416s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.34s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-268828 -n newest-cni-268828
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-268828 -n newest-cni-268828: exit status 7 (68.178265ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-268828 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (17.12s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-268828 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-268828 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (16.522576793s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-268828 -n newest-cni-268828
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.12s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-268828 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/newest-cni/serial/Pause (3.88s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-268828 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-268828 -n newest-cni-268828
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-268828 -n newest-cni-268828: exit status 2 (540.45589ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-268828 -n newest-cni-268828
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-268828 -n newest-cni-268828: exit status 2 (436.458891ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-268828 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-268828 -n newest-cni-268828
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-268828 -n newest-cni-268828
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.88s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.91s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-071466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-071466 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.745760801s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-071466 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.91s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.49s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-071466 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-071466 --alsologtostderr -v=3: (12.486752862s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.49s)

TestNetworkPlugins/group/auto/Start (89.17s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
E1123 11:02:06.693482 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m29.173328666s)
--- PASS: TestNetworkPlugins/group/auto/Start (89.17s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466: exit status 7 (69.795517ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-071466 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.01s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-071466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1123 11:02:54.960159 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-071466 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (54.631118083s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.01s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cbgxh" [b22d665d-2973-44f6-8881-d28f37d27fae] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002947808s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cbgxh" [b22d665d-2973-44f6-8881-d28f37d27fae] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006786804s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-071466 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-071466 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-071466 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466: exit status 2 (355.24528ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466: exit status 2 (342.795707ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-071466 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-071466 -n default-k8s-diff-port-071466
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.16s)
E1123 11:08:47.864757 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/auto-378762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/kindnet/Start (85.55s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m25.553908585s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (85.55s)

TestNetworkPlugins/group/auto/KubeletFlags (0.55s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-378762 "pgrep -a kubelet"
I1123 11:03:26.994366 1584532 config.go:182] Loaded profile config "auto-378762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.55s)

TestNetworkPlugins/group/auto/NetCatPod (8.41s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-378762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7h5tq" [403bad7c-4436-41ad-9275-c21f1e79ff8e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1123 11:03:28.615404 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-7h5tq" [403bad7c-4436-41ad-9275-c21f1e79ff8e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.003707762s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.41s)

TestNetworkPlugins/group/auto/DNS (0.24s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-378762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.19s)

TestNetworkPlugins/group/auto/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E1123 11:03:35.990542 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:03:35.997262 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:03:36.008611 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)
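Localhost and HairPin above differ only in the target of the netcat probe: the first connects to the pod's own loopback, the second connects back to the pod through its Service name, which exercises hairpin NAT in the CNI. Reproduced by hand (same commands the test logs):

    # loopback reachability
    kubectl --context auto-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # hairpin: reach the pod via the netcat Service from inside the same pod
    kubectl --context auto-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"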

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (60.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1123 11:04:16.965948 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m0.53107022s)
--- PASS: TestNetworkPlugins/group/calico/Start (60.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-rjzml" [5c5de2eb-d723-4eac-9b35-6d89b4ea6388] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003773324s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
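ControllerPod only verifies that the CNI's own controller/daemonset pod is healthy; the label and namespace vary per plugin (app=kindnet in kube-system here, k8s-app=calico-node for calico, app=flannel in kube-flannel further down). A hand-run check against this profile might look like the following, where kubectl get stands in for the harness's 10m poll loop:

    kubectl --context kindnet-378762 -n kube-system get pods -l app=kindnet
    # the test waits up to 10m for the matching pod to report Running and healthy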

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-378762 "pgrep -a kubelet"
I1123 11:04:52.459623 1584532 config.go:182] Loaded profile config "kindnet-378762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (9.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-378762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-r7rlw" [ef200bb4-c46e-454c-927f-0675d6b1a4c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-r7rlw" [ef200bb4-c46e-454c-927f-0675d6b1a4c6] Running
E1123 11:04:57.928163 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:05:00.682598 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/functional-531629/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004065323s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-378762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-6hsrh" [b2c30695-a2ed-4572-a285-c5f55149c5e6] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-6hsrh" [b2c30695-a2ed-4572-a285-c5f55149c5e6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00405469s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-378762 "pgrep -a kubelet"
I1123 11:05:09.390626 1584532 config.go:182] Loaded profile config "calico-378762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (9.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-378762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hxwqb" [4cebd407-8dab-418b-ae7a-aefdb1381c9d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hxwqb" [4cebd407-8dab-418b-ae7a-aefdb1381c9d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.005209598s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-378762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (65.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m5.860942632s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (65.86s)
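The Start runs in this group differ only in how the CNI is selected on the minikube command line: built-in names such as --cni=calico, --cni=flannel or --cni=bridge, versus a path to a custom manifest as used here. A trimmed sketch of the two forms (flags copied from the logged commands, other options omitted):

    # built-in CNI by name
    out/minikube-linux-arm64 start -p calico-378762 --cni=calico --driver=docker --container-runtime=containerd
    # custom CNI from a local manifest
    out/minikube-linux-arm64 start -p custom-flannel-378762 --cni=testdata/kube-flannel.yaml --driver=docker --container-runtime=containerd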

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (72.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1123 11:06:12.456771 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/old-k8s-version-162750/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:06:19.849541 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/no-preload-055571/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m12.52927027s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (72.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-378762 "pgrep -a kubelet"
I1123 11:06:31.047836 1584532 config.go:182] Loaded profile config "custom-flannel-378762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-378762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zpfdb" [7496d2d6-6447-4630-82dd-ca777b5cda8c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zpfdb" [7496d2d6-6447-4630-82dd-ca777b5cda8c] Running
E1123 11:06:36.884687 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:06:36.891034 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:06:36.902579 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:06:36.923970 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:06:36.965436 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:06:37.046881 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:06:37.208432 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:06:37.530033 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:06:38.172682 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:06:39.454901 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004237874s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-378762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-378762 "pgrep -a kubelet"
I1123 11:06:58.711868 1584532 config.go:182] Loaded profile config "enable-default-cni-378762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-378762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xlqks" [5dae9f59-d42a-4596-b0d5-72b4b4ae970c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xlqks" [5dae9f59-d42a-4596-b0d5-72b4b4ae970c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.012800892s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (60.75s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m0.745051526s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-378762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (76.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1123 11:07:38.070464 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:07:54.961066 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/addons-966210/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 11:07:58.824871 1584532 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/default-k8s-diff-port-071466/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-378762 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m16.080713944s)
--- PASS: TestNetworkPlugins/group/bridge/Start (76.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-g5mp2" [8dc1380f-7676-45f9-9728-260995f7b26f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004083783s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-378762 "pgrep -a kubelet"
I1123 11:08:10.481018 1584532 config.go:182] Loaded profile config "flannel-378762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (10.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-378762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dvvhw" [d8d2909a-01f7-42d6-ad15-ac708b60fada] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dvvhw" [d8d2909a-01f7-42d6-ad15-ac708b60fada] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003454367s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.40s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-378762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-378762 "pgrep -a kubelet"
I1123 11:08:51.373969 1584532 config.go:182] Loaded profile config "bridge-378762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-378762 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-w2jdz" [ae5c20ff-0e8a-41f4-9fd7-0c32e7e5fc7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-w2jdz" [ae5c20ff-0e8a-41f4-9fd7-0c32e7e5fc7f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.00398683s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-378762 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-378762 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.29s)

                                                
                                    

Test skip (30/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.72s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-864587 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-864587" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-864587
--- SKIP: TestDownloadOnlyKic (0.72s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-436374" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-436374
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (4.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-378762 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-378762" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:48:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-871841
contexts:
- context:
    cluster: kubernetes-upgrade-871841
    user: kubernetes-upgrade-871841
  name: kubernetes-upgrade-871841
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-871841
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/kubernetes-upgrade-871841/client.crt
    client-key: /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/kubernetes-upgrade-871841/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-378762

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-378762"

                                                
                                                
----------------------- debugLogs end: kubenet-378762 [took: 4.004086359s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-378762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-378762
--- SKIP: TestNetworkPlugins/group/kubenet (4.17s)
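
Note on the output above: every "Profile "kubenet-378762" not found" and "context "kubenet-378762" does not exist" line is expected, because the kubenet test is skipped before a cluster is ever created for that profile, so the post-mortem debugLogs collector has nothing to query. A minimal sketch of guarding kubectl-based collection on an existing context (the guard is illustrative and not part of the test suite):

    # skip kubectl collection when the kubeconfig context is missing
    if kubectl config get-contexts kubenet-378762 >/dev/null 2>&1; then
        kubectl --context kubenet-378762 get pods -A
    else
        echo "context kubenet-378762 not present; skipping kubectl collection"
    fi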

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.39s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-378762 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-378762" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21968-1582671/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 10:48:52 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-871841
contexts:
- context:
    cluster: kubernetes-upgrade-871841
    user: kubernetes-upgrade-871841
  name: kubernetes-upgrade-871841
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-871841
  user:
    client-certificate: /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/kubernetes-upgrade-871841/client.crt
    client-key: /home/jenkins/minikube-integration/21968-1582671/.minikube/profiles/kubernetes-upgrade-871841/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-378762

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-378762" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-378762"

                                                
                                                
----------------------- debugLogs end: cilium-378762 [took: 5.126884103s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-378762" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-378762
--- SKIP: TestNetworkPlugins/group/cilium (5.39s)
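
The cilium entry above follows the same pattern: the test is skipped at net_test.go:102 before any cluster is started, so the collector only records "not found" / "does not exist" errors before the profile is cleaned up. The cleanup the test helper performs can also be done by hand; a hedged sketch using the same commands this report already invokes:

    # list any leftover profiles, then delete the skipped one
    out/minikube-linux-arm64 profile list
    out/minikube-linux-arm64 delete -p cilium-378762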

                                                
                                    