Test Report: Docker_Linux_containerd_arm64 21969

                    
ab0a8cfdd326918695f502976b3bdb249954a688:2025-11-23:42465

Test failures (4/333)

Order  Failed test                                                    Duration (s)
301    TestStartStop/group/old-k8s-version/serial/DeployApp           13.9
314    TestStartStop/group/default-k8s-diff-port/serial/DeployApp     15.31
315    TestStartStop/group/embed-certs/serial/DeployApp               14.68
341    TestStartStop/group/no-preload/serial/DeployApp                15.94
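All four failures are DeployApp sub-tests of TestStartStop; the first one is broken out in detail below. To iterate on just these cases locally, the standard Go test runner can select them by regex. This is a sketch only, assuming the repository's test/integration package; the CI harness passes additional flags (minikube binary path, driver, container runtime) that are omitted here:

    # Hypothetical local re-run of the four failed sub-tests (flags beyond -v/-run are left out).
    go test -v -run 'TestStartStop/group/(old-k8s-version|default-k8s-diff-port|embed-certs|no-preload)/serial/DeployApp' ./test/integration/...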
TestStartStop/group/old-k8s-version/serial/DeployApp (13.9s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-132097 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ee3865ff-dc6d-4911-94c7-09b6024edb7c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ee3865ff-dc6d-4911-94c7-09b6024edb7c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003250397s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-132097 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
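The failing assertion can be reproduced by hand with the same command the test runs (context and pod name are taken from this log); this is a sketch, not part of the test output:

    kubectl --context old-k8s-version-132097 exec busybox -- /bin/sh -c "ulimit -n"
    # The test expects 1048576; in this run the pod reported 1024.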
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-132097
helpers_test.go:243: (dbg) docker inspect old-k8s-version-132097:

-- stdout --
	[
	    {
	        "Id": "4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef",
	        "Created": "2025-11-23T08:57:04.667839157Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205947,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:57:04.769159087Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef/hosts",
	        "LogPath": "/var/lib/docker/containers/4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef/4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef-json.log",
	        "Name": "/old-k8s-version-132097",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-132097:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-132097",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef",
	                "LowerDir": "/var/lib/docker/overlay2/ec3c232564eb8b8c04270c5b0c95eedb013a5868deed42f0509c302335a2d989-init/diff:/var/lib/docker/overlay2/e1de88c117c0c773e1fa636243190fd97eadaa5a8e1ee08fd53827cbac767d35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec3c232564eb8b8c04270c5b0c95eedb013a5868deed42f0509c302335a2d989/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec3c232564eb8b8c04270c5b0c95eedb013a5868deed42f0509c302335a2d989/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec3c232564eb8b8c04270c5b0c95eedb013a5868deed42f0509c302335a2d989/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-132097",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-132097/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-132097",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-132097",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-132097",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c55dbb2ab6b018b92d4ec3c5691fa02993e68f8d136bf1df6a3c7e37ab8808",
	            "SandboxKey": "/var/run/docker/netns/58c55dbb2ab6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-132097": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:9b:d3:73:5f:30",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "109fa1b68d0f825ac67c625cd8049aeedd7e3d80891821156d3bdfaf1d82aaa5",
	                    "EndpointID": "c9cddc19ad17439228c12a59f80e1b67ed89c3737c3d6b86f08ac0b30fc26527",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-132097",
	                        "4d0452bb4c92"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
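One detail worth pulling out of the inspect dump above: "HostConfig.Ulimits" is an empty list, so no explicit nofile limit was set on the minikube container at create time, and the limit observed inside the pod most likely falls through to the Docker daemon or runtime defaults on this host. A quick way to query just that field (container name taken from this log):

    docker inspect old-k8s-version-132097 --format '{{json .HostConfig.Ulimits}}'
    # [] in this run, meaning no per-container ulimit overrides.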
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-132097 -n old-k8s-version-132097
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-132097 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-132097 logs -n 25: (1.221762453s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-694698 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo containerd config dump                                                                                                                                                                                                        │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo crio config                                                                                                                                                                                                                   │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ delete  │ -p cilium-694698                                                                                                                                                                                                                                    │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p force-systemd-env-023309 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-023309  │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p kubernetes-upgrade-291582                                                                                                                                                                                                                        │ kubernetes-upgrade-291582 │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p cert-expiration-918102 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-918102    │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ force-systemd-env-023309 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-023309  │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p force-systemd-env-023309                                                                                                                                                                                                                         │ force-systemd-env-023309  │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-options-886452 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-886452       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ cert-options-886452 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-886452       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ -p cert-options-886452 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-886452       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-options-886452                                                                                                                                                                                                                              │ cert-options-886452       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-132097    │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:56:58
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:56:58.075716  205557 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:56:58.075955  205557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:56:58.075994  205557 out.go:374] Setting ErrFile to fd 2...
	I1123 08:56:58.076024  205557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:56:58.076443  205557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:56:58.077075  205557 out.go:368] Setting JSON to false
	I1123 08:56:58.078143  205557 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5970,"bootTime":1763882248,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:56:58.078269  205557 start.go:143] virtualization:  
	I1123 08:56:58.082214  205557 out.go:179] * [old-k8s-version-132097] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:56:58.087005  205557 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:56:58.087084  205557 notify.go:221] Checking for updates...
	I1123 08:56:58.094076  205557 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:56:58.097487  205557 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:56:58.100734  205557 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:56:58.103928  205557 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:56:58.107100  205557 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:56:58.110841  205557 config.go:182] Loaded profile config "cert-expiration-918102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:56:58.111023  205557 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:56:58.149865  205557 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:56:58.150008  205557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:56:58.209508  205557 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:56:58.200005902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:56:58.209615  205557 docker.go:319] overlay module found
	I1123 08:56:58.212913  205557 out.go:179] * Using the docker driver based on user configuration
	I1123 08:56:58.216078  205557 start.go:309] selected driver: docker
	I1123 08:56:58.216104  205557 start.go:927] validating driver "docker" against <nil>
	I1123 08:56:58.216119  205557 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:56:58.216866  205557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:56:58.280338  205557 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:56:58.270474537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:56:58.280506  205557 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:56:58.280724  205557 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:56:58.283780  205557 out.go:179] * Using Docker driver with root privileges
	I1123 08:56:58.286736  205557 cni.go:84] Creating CNI manager for ""
	I1123 08:56:58.286810  205557 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:56:58.286826  205557 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:56:58.286923  205557 start.go:353] cluster config:
	{Name:old-k8s-version-132097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-132097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSH
AuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:56:58.292003  205557 out.go:179] * Starting "old-k8s-version-132097" primary control-plane node in "old-k8s-version-132097" cluster
	I1123 08:56:58.294873  205557 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:56:58.297758  205557 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:56:58.300686  205557 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:56:58.300734  205557 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 08:56:58.300747  205557 cache.go:65] Caching tarball of preloaded images
	I1123 08:56:58.300769  205557 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:56:58.300828  205557 preload.go:238] Found /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:56:58.300838  205557 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:56:58.300946  205557 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/config.json ...
	I1123 08:56:58.300964  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/config.json: {Name:mk1988d6b954c625d3bd1df0ce00c5571f04128f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:58.320967  205557 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:56:58.320991  205557 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:56:58.321007  205557 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:56:58.321037  205557 start.go:360] acquireMachinesLock for old-k8s-version-132097: {Name:mk569d745a741486fc2918f879c45baa624a6ce4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:56:58.321139  205557 start.go:364] duration metric: took 82.184µs to acquireMachinesLock for "old-k8s-version-132097"
	I1123 08:56:58.321169  205557 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-132097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-132097 Namespace:default APIServerHAVIP:
APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:f
alse CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:56:58.321245  205557 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:56:58.324673  205557 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:56:58.324908  205557 start.go:159] libmachine.API.Create for "old-k8s-version-132097" (driver="docker")
	I1123 08:56:58.324945  205557 client.go:173] LocalClient.Create starting
	I1123 08:56:58.325029  205557 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem
	I1123 08:56:58.325064  205557 main.go:143] libmachine: Decoding PEM data...
	I1123 08:56:58.325084  205557 main.go:143] libmachine: Parsing certificate...
	I1123 08:56:58.325137  205557 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem
	I1123 08:56:58.325165  205557 main.go:143] libmachine: Decoding PEM data...
	I1123 08:56:58.325181  205557 main.go:143] libmachine: Parsing certificate...
	I1123 08:56:58.325539  205557 cli_runner.go:164] Run: docker network inspect old-k8s-version-132097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:56:58.341601  205557 cli_runner.go:211] docker network inspect old-k8s-version-132097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:56:58.341689  205557 network_create.go:284] running [docker network inspect old-k8s-version-132097] to gather additional debugging logs...
	I1123 08:56:58.341707  205557 cli_runner.go:164] Run: docker network inspect old-k8s-version-132097
	W1123 08:56:58.358311  205557 cli_runner.go:211] docker network inspect old-k8s-version-132097 returned with exit code 1
	I1123 08:56:58.358345  205557 network_create.go:287] error running [docker network inspect old-k8s-version-132097]: docker network inspect old-k8s-version-132097: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-132097 not found
	I1123 08:56:58.358358  205557 network_create.go:289] output of [docker network inspect old-k8s-version-132097]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-132097 not found
	
	** /stderr **
	I1123 08:56:58.358575  205557 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:56:58.374942  205557 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ab12b2c3b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:c9:6d:7b:80:76} reservation:<nil>}
	I1123 08:56:58.375286  205557 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7f5e4a52a57c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:1a:79:b2:02:66} reservation:<nil>}
	I1123 08:56:58.375689  205557 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed031858d624 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:47:7d:04:56:4a} reservation:<nil>}
	I1123 08:56:58.375909  205557 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7b189b3c67c1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:f6:56:52:0a:44:1f} reservation:<nil>}
	I1123 08:56:58.376301  205557 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0b0e0}
	I1123 08:56:58.376319  205557 network_create.go:124] attempt to create docker network old-k8s-version-132097 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:56:58.376383  205557 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-132097 old-k8s-version-132097
	I1123 08:56:58.438591  205557 network_create.go:108] docker network old-k8s-version-132097 192.168.85.0/24 created
	I1123 08:56:58.438623  205557 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-132097" container
	I1123 08:56:58.438715  205557 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:56:58.457835  205557 cli_runner.go:164] Run: docker volume create old-k8s-version-132097 --label name.minikube.sigs.k8s.io=old-k8s-version-132097 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:56:58.476578  205557 oci.go:103] Successfully created a docker volume old-k8s-version-132097
	I1123 08:56:58.476673  205557 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-132097-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-132097 --entrypoint /usr/bin/test -v old-k8s-version-132097:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:56:58.996286  205557 oci.go:107] Successfully prepared a docker volume old-k8s-version-132097
	I1123 08:56:58.996363  205557 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:56:58.996381  205557 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:56:58.996448  205557 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-132097:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:57:04.592308  205557 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-132097:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.595821587s)
	I1123 08:57:04.592349  205557 kic.go:203] duration metric: took 5.595965588s to extract preloaded images to volume ...
	W1123 08:57:04.592490  205557 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:57:04.592606  205557 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:57:04.651143  205557 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-132097 --name old-k8s-version-132097 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-132097 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-132097 --network old-k8s-version-132097 --ip 192.168.85.2 --volume old-k8s-version-132097:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:57:05.027333  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Running}}
	I1123 08:57:05.053202  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:05.077042  205557 cli_runner.go:164] Run: docker exec old-k8s-version-132097 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:57:05.138471  205557 oci.go:144] the created container "old-k8s-version-132097" has a running status.
	I1123 08:57:05.138504  205557 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa...
	I1123 08:57:05.830255  205557 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:57:05.850811  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:05.869025  205557 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:57:05.869046  205557 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-132097 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:57:05.911771  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:05.939737  205557 machine.go:94] provisionDockerMachine start ...
	I1123 08:57:05.939829  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:05.957917  205557 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:05.958259  205557 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1123 08:57:05.958282  205557 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:57:05.958921  205557 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:57:09.115043  205557 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-132097
	
	I1123 08:57:09.115066  205557 ubuntu.go:182] provisioning hostname "old-k8s-version-132097"
	I1123 08:57:09.115142  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:09.132902  205557 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:09.133227  205557 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1123 08:57:09.133247  205557 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-132097 && echo "old-k8s-version-132097" | sudo tee /etc/hostname
	I1123 08:57:09.292790  205557 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-132097
	
	I1123 08:57:09.292878  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:09.311023  205557 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:09.311437  205557 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1123 08:57:09.311458  205557 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-132097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-132097/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-132097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:57:09.463645  205557 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:57:09.463673  205557 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-2811/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-2811/.minikube}
	I1123 08:57:09.463693  205557 ubuntu.go:190] setting up certificates
	I1123 08:57:09.463703  205557 provision.go:84] configureAuth start
	I1123 08:57:09.463774  205557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132097
	I1123 08:57:09.490883  205557 provision.go:143] copyHostCerts
	I1123 08:57:09.490965  205557 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem, removing ...
	I1123 08:57:09.490980  205557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem
	I1123 08:57:09.491064  205557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem (1679 bytes)
	I1123 08:57:09.491170  205557 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem, removing ...
	I1123 08:57:09.491181  205557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem
	I1123 08:57:09.491218  205557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem (1082 bytes)
	I1123 08:57:09.491292  205557 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem, removing ...
	I1123 08:57:09.491301  205557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem
	I1123 08:57:09.491332  205557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem (1123 bytes)
	I1123 08:57:09.491424  205557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-132097 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-132097]
	I1123 08:57:09.995296  205557 provision.go:177] copyRemoteCerts
	I1123 08:57:09.995378  205557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:57:09.995425  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:10.022197  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:10.131600  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:57:10.151668  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:57:10.170212  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:57:10.193623  205557 provision.go:87] duration metric: took 729.898059ms to configureAuth
	I1123 08:57:10.193653  205557 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:57:10.193893  205557 config.go:182] Loaded profile config "old-k8s-version-132097": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:57:10.193908  205557 machine.go:97] duration metric: took 4.254150104s to provisionDockerMachine
	I1123 08:57:10.193917  205557 client.go:176] duration metric: took 11.868961603s to LocalClient.Create
	I1123 08:57:10.193936  205557 start.go:167] duration metric: took 11.869028862s to libmachine.API.Create "old-k8s-version-132097"
	I1123 08:57:10.193949  205557 start.go:293] postStartSetup for "old-k8s-version-132097" (driver="docker")
	I1123 08:57:10.193959  205557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:57:10.194028  205557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:57:10.194072  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:10.214297  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:10.319610  205557 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:57:10.322901  205557 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:57:10.322933  205557 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:57:10.322946  205557 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/addons for local assets ...
	I1123 08:57:10.323005  205557 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/files for local assets ...
	I1123 08:57:10.323088  205557 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem -> 46242.pem in /etc/ssl/certs
	I1123 08:57:10.323198  205557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:57:10.331074  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:57:10.349627  205557 start.go:296] duration metric: took 155.662635ms for postStartSetup
	I1123 08:57:10.350015  205557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132097
	I1123 08:57:10.367315  205557 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/config.json ...
	I1123 08:57:10.367640  205557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:57:10.367695  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:10.384602  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:10.488274  205557 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:57:10.493105  205557 start.go:128] duration metric: took 12.171844318s to createHost
	I1123 08:57:10.493131  205557 start.go:83] releasing machines lock for "old-k8s-version-132097", held for 12.171978088s
	I1123 08:57:10.493204  205557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132097
	I1123 08:57:10.510164  205557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:57:10.510257  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:10.510172  205557 ssh_runner.go:195] Run: cat /version.json
	I1123 08:57:10.510356  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:10.526416  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:10.545392  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:10.718091  205557 ssh_runner.go:195] Run: systemctl --version
	I1123 08:57:10.724586  205557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:57:10.729451  205557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:57:10.729547  205557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:57:10.757493  205557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:57:10.757566  205557 start.go:496] detecting cgroup driver to use...
	I1123 08:57:10.757615  205557 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:57:10.757691  205557 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:57:10.772292  205557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:57:10.785621  205557 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:57:10.785731  205557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:57:10.802884  205557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:57:10.823678  205557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:57:10.945776  205557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:57:11.077075  205557 docker.go:234] disabling docker service ...
	I1123 08:57:11.077193  205557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:57:11.100476  205557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:57:11.114000  205557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:57:11.241484  205557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:57:11.359238  205557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:57:11.371792  205557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:57:11.385313  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1123 08:57:11.394176  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:57:11.403118  205557 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:57:11.403204  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:57:11.411891  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:57:11.420403  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:57:11.429844  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:57:11.438472  205557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:57:11.448237  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:57:11.457958  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:57:11.467751  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:57:11.477714  205557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:57:11.485453  205557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:57:11.492795  205557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:57:11.619250  205557 ssh_runner.go:195] Run: sudo systemctl restart containerd
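Pieced together from the sed edits above (a sketch only; the exact section nesting of the containerd 2.x config may differ), the fragment of /etc/containerd/config.toml that these commands produce before the restart looks roughly like:

	# reconstructed from the sed patterns above, not dumped from the node
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = false
	  conf_dir = "/etc/cni/net.d"   # lives under the CNI sub-table in the real file
	  SystemdCgroup = false         # under the runc options sub-table; cgroupfs driver, matching the detected host driver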
	I1123 08:57:11.753073  205557 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:57:11.753220  205557 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:57:11.757105  205557 start.go:564] Will wait 60s for crictl version
	I1123 08:57:11.757216  205557 ssh_runner.go:195] Run: which crictl
	I1123 08:57:11.760762  205557 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:57:11.786059  205557 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:57:11.786175  205557 ssh_runner.go:195] Run: containerd --version
	I1123 08:57:11.808380  205557 ssh_runner.go:195] Run: containerd --version
	I1123 08:57:11.834126  205557 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1123 08:57:11.837087  205557 cli_runner.go:164] Run: docker network inspect old-k8s-version-132097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:57:11.854569  205557 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:57:11.858321  205557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:57:11.868036  205557 kubeadm.go:884] updating cluster {Name:old-k8s-version-132097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-132097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:57:11.868180  205557 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:57:11.868250  205557 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:57:11.892439  205557 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:57:11.892464  205557 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:57:11.892528  205557 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:57:11.916323  205557 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:57:11.916348  205557 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:57:11.916357  205557 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1123 08:57:11.916452  205557 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-132097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-132097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:57:11.916521  205557 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:57:11.955629  205557 cni.go:84] Creating CNI manager for ""
	I1123 08:57:11.955653  205557 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:57:11.955672  205557 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:57:11.955696  205557 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-132097 NodeName:old-k8s-version-132097 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:57:11.955852  205557 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-132097"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:57:11.955924  205557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 08:57:11.964586  205557 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:57:11.964655  205557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:57:11.972851  205557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1123 08:57:11.985890  205557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:57:11.999580  205557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1123 08:57:12.016007  205557 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:57:12.021860  205557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:57:12.035856  205557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:57:12.145204  205557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:57:12.162111  205557 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097 for IP: 192.168.85.2
	I1123 08:57:12.162136  205557 certs.go:195] generating shared ca certs ...
	I1123 08:57:12.162153  205557 certs.go:227] acquiring lock for ca certs: {Name:mk62ed57b444cc29d692b7c3030f7d32bd07c4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.162293  205557 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key
	I1123 08:57:12.162343  205557 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key
	I1123 08:57:12.162358  205557 certs.go:257] generating profile certs ...
	I1123 08:57:12.162414  205557 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.key
	I1123 08:57:12.162430  205557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt with IP's: []
	I1123 08:57:12.346401  205557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt ...
	I1123 08:57:12.346432  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: {Name:mke988f2355e47aa3b3cecde8bcb924023bd7a1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.346632  205557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.key ...
	I1123 08:57:12.346659  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.key: {Name:mka607eb42432889fa6550a717949c1750577787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.346751  205557 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key.82dac9d1
	I1123 08:57:12.346774  205557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt.82dac9d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:57:12.400945  205557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt.82dac9d1 ...
	I1123 08:57:12.400971  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt.82dac9d1: {Name:mk94df40e4fc2d589b542121aa0a3b7e606816f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.401142  205557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key.82dac9d1 ...
	I1123 08:57:12.401155  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key.82dac9d1: {Name:mkfa0fabc149dbc3e492e0dda94c640912f6ea5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.401245  205557 certs.go:382] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt.82dac9d1 -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt
	I1123 08:57:12.401335  205557 certs.go:386] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key.82dac9d1 -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key
	I1123 08:57:12.401397  205557 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.key
	I1123 08:57:12.401417  205557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.crt with IP's: []
	I1123 08:57:12.542191  205557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.crt ...
	I1123 08:57:12.542223  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.crt: {Name:mk38a6f6cc975b1ab50cc4eb53e87eb31af36277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.542397  205557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.key ...
	I1123 08:57:12.542422  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.key: {Name:mkc1a85f87bbbc56731c9b0fb3a53076a0b001d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.542626  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem (1338 bytes)
	W1123 08:57:12.542674  205557 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624_empty.pem, impossibly tiny 0 bytes
	I1123 08:57:12.542689  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:57:12.542728  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:57:12.542758  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:57:12.542785  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem (1679 bytes)
	I1123 08:57:12.542835  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:57:12.543488  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:57:12.562736  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:57:12.581539  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:57:12.600098  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:57:12.618329  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 08:57:12.636558  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:57:12.654553  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:57:12.671996  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:57:12.689823  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:57:12.708777  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem --> /usr/share/ca-certificates/4624.pem (1338 bytes)
	I1123 08:57:12.726519  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /usr/share/ca-certificates/46242.pem (1708 bytes)
	I1123 08:57:12.745649  205557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:57:12.758922  205557 ssh_runner.go:195] Run: openssl version
	I1123 08:57:12.766336  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:57:12.775543  205557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:57:12.779107  205557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:57:12.779170  205557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:57:12.822771  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:57:12.831257  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4624.pem && ln -fs /usr/share/ca-certificates/4624.pem /etc/ssl/certs/4624.pem"
	I1123 08:57:12.839690  205557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4624.pem
	I1123 08:57:12.843488  205557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:18 /usr/share/ca-certificates/4624.pem
	I1123 08:57:12.843554  205557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4624.pem
	I1123 08:57:12.886834  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4624.pem /etc/ssl/certs/51391683.0"
	I1123 08:57:12.895581  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46242.pem && ln -fs /usr/share/ca-certificates/46242.pem /etc/ssl/certs/46242.pem"
	I1123 08:57:12.903817  205557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46242.pem
	I1123 08:57:12.911417  205557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:18 /usr/share/ca-certificates/46242.pem
	I1123 08:57:12.911536  205557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46242.pem
	I1123 08:57:12.969951  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46242.pem /etc/ssl/certs/3ec20f2e.0"
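The hash-and-symlink pairs above follow OpenSSL's subject-hash convention: each CA certificate linked into /etc/ssl/certs also gets a <subject-hash>.0 symlink so TLS clients can locate it. Reproduced by hand (illustrative only), the minikubeCA link would be:

	cd /etc/ssl/certs
	sudo ln -fs minikubeCA.pem "$(openssl x509 -hash -noout -in minikubeCA.pem).0"   # b5213941.0 in this run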
	I1123 08:57:12.979126  205557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:57:12.983876  205557 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:57:12.983991  205557 kubeadm.go:401] StartCluster: {Name:old-k8s-version-132097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-132097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:57:12.984075  205557 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:57:12.984170  205557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:57:13.014456  205557 cri.go:89] found id: ""
	I1123 08:57:13.014564  205557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:57:13.022498  205557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:57:13.030730  205557 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:57:13.030849  205557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:57:13.039816  205557 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:57:13.039844  205557 kubeadm.go:158] found existing configuration files:
	
	I1123 08:57:13.039921  205557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:57:13.048545  205557 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:57:13.048634  205557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:57:13.057763  205557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:57:13.067097  205557 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:57:13.067210  205557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:57:13.075916  205557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:57:13.084413  205557 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:57:13.084486  205557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:57:13.092014  205557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:57:13.100047  205557 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:57:13.100161  205557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:57:13.107516  205557 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:57:13.192784  205557 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:57:13.276045  205557 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:57:28.892126  205557 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1123 08:57:28.892182  205557 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:57:28.892271  205557 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:57:28.892326  205557 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:57:28.892360  205557 kubeadm.go:319] OS: Linux
	I1123 08:57:28.892405  205557 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:57:28.892453  205557 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:57:28.892514  205557 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:57:28.892563  205557 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:57:28.892610  205557 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:57:28.892658  205557 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:57:28.892703  205557 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:57:28.892750  205557 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:57:28.892804  205557 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:57:28.892880  205557 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:57:28.892975  205557 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:57:28.893066  205557 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1123 08:57:28.893128  205557 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:57:28.896176  205557 out.go:252]   - Generating certificates and keys ...
	I1123 08:57:28.896279  205557 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:57:28.896359  205557 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:57:28.896428  205557 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:57:28.896485  205557 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:57:28.896550  205557 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:57:28.896612  205557 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:57:28.896674  205557 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:57:28.896842  205557 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-132097] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:57:28.896918  205557 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:57:28.897064  205557 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-132097] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:57:28.897134  205557 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:57:28.897198  205557 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:57:28.897245  205557 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:57:28.897300  205557 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:57:28.897351  205557 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:57:28.897404  205557 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:57:28.897467  205557 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:57:28.897521  205557 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:57:28.897602  205557 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:57:28.897668  205557 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:57:28.900674  205557 out.go:252]   - Booting up control plane ...
	I1123 08:57:28.900844  205557 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:57:28.900966  205557 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:57:28.901088  205557 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:57:28.901238  205557 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:57:28.901375  205557 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:57:28.901448  205557 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:57:28.901650  205557 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1123 08:57:28.901770  205557 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.001968 seconds
	I1123 08:57:28.901932  205557 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:57:28.902114  205557 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:57:28.902215  205557 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:57:28.902423  205557 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-132097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:57:28.902483  205557 kubeadm.go:319] [bootstrap-token] Using token: 7z9j2b.hjlhuwa1mzqkz0w6
	I1123 08:57:28.905268  205557 out.go:252]   - Configuring RBAC rules ...
	I1123 08:57:28.905378  205557 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:57:28.905539  205557 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:57:28.905717  205557 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:57:28.905895  205557 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:57:28.906047  205557 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:57:28.906178  205557 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:57:28.906334  205557 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:57:28.906402  205557 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:57:28.906484  205557 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:57:28.906511  205557 kubeadm.go:319] 
	I1123 08:57:28.906602  205557 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:57:28.906628  205557 kubeadm.go:319] 
	I1123 08:57:28.906742  205557 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:57:28.906770  205557 kubeadm.go:319] 
	I1123 08:57:28.906817  205557 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:57:28.906917  205557 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:57:28.907030  205557 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:57:28.907060  205557 kubeadm.go:319] 
	I1123 08:57:28.907136  205557 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:57:28.907166  205557 kubeadm.go:319] 
	I1123 08:57:28.907236  205557 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:57:28.907272  205557 kubeadm.go:319] 
	I1123 08:57:28.907367  205557 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:57:28.907472  205557 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:57:28.907569  205557 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:57:28.907593  205557 kubeadm.go:319] 
	I1123 08:57:28.907711  205557 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:57:28.907823  205557 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:57:28.907850  205557 kubeadm.go:319] 
	I1123 08:57:28.907971  205557 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7z9j2b.hjlhuwa1mzqkz0w6 \
	I1123 08:57:28.908112  205557 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c \
	I1123 08:57:28.908155  205557 kubeadm.go:319] 	--control-plane 
	I1123 08:57:28.908180  205557 kubeadm.go:319] 
	I1123 08:57:28.908366  205557 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:57:28.908403  205557 kubeadm.go:319] 
	I1123 08:57:28.908523  205557 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7z9j2b.hjlhuwa1mzqkz0w6 \
	I1123 08:57:28.908684  205557 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c 
	I1123 08:57:28.908716  205557 cni.go:84] Creating CNI manager for ""
	I1123 08:57:28.908740  205557 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:57:28.911862  205557 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:57:28.914765  205557 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:57:28.919219  205557 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1123 08:57:28.919293  205557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:57:28.961110  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:57:30.161029  205557 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.199828121s)
	I1123 08:57:30.161135  205557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:57:30.161234  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:30.161393  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-132097 minikube.k8s.io/updated_at=2025_11_23T08_57_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=old-k8s-version-132097 minikube.k8s.io/primary=true
	I1123 08:57:30.356796  205557 ops.go:34] apiserver oom_adj: -16
	I1123 08:57:30.356993  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:30.857895  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:31.357933  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:31.857379  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:32.357617  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:32.857095  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:33.357190  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:33.857128  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:34.357322  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:34.857082  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:35.357894  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:35.857960  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:36.357415  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:36.857626  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:37.357099  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:37.857319  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:38.357164  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:38.857866  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:39.357380  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:39.857673  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:40.357901  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:40.857934  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:41.357100  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:41.514583  205557 kubeadm.go:1114] duration metric: took 11.353416765s to wait for elevateKubeSystemPrivileges
	I1123 08:57:41.514628  205557 kubeadm.go:403] duration metric: took 28.530663979s to StartCluster
	I1123 08:57:41.514645  205557 settings.go:142] acquiring lock: {Name:mkd0156f6f98ed352de83fb5c4c92474ddea9220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:41.514710  205557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:57:41.515709  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/kubeconfig: {Name:mk75cb4a9442799c344ac747e18ea4edd6e23c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:41.515940  205557 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:57:41.516056  205557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:57:41.516313  205557 config.go:182] Loaded profile config "old-k8s-version-132097": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:57:41.516354  205557 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:57:41.516417  205557 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-132097"
	I1123 08:57:41.516438  205557 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-132097"
	I1123 08:57:41.516465  205557 host.go:66] Checking if "old-k8s-version-132097" exists ...
	I1123 08:57:41.516971  205557 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-132097"
	I1123 08:57:41.516998  205557 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-132097"
	I1123 08:57:41.517323  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:41.517601  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:41.520451  205557 out.go:179] * Verifying Kubernetes components...
	I1123 08:57:41.524128  205557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:57:41.560944  205557 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-132097"
	I1123 08:57:41.560983  205557 host.go:66] Checking if "old-k8s-version-132097" exists ...
	I1123 08:57:41.561527  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:41.575551  205557 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:57:41.579480  205557 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:57:41.579503  205557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:57:41.579570  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:41.597748  205557 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:57:41.597769  205557 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:57:41.597833  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:41.630287  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:41.631253  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:41.872614  205557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:57:41.889603  205557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:57:41.889795  205557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:57:41.890281  205557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:57:42.772391  205557 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-132097" to be "Ready" ...
	I1123 08:57:42.772688  205557 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 08:57:43.203763  205557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.313415514s)
	I1123 08:57:43.206929  205557 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 08:57:43.209915  205557 addons.go:530] duration metric: took 1.69355008s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 08:57:43.277106  205557 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-132097" context rescaled to 1 replicas
	W1123 08:57:44.776064  205557 node_ready.go:57] node "old-k8s-version-132097" has "Ready":"False" status (will retry)
	W1123 08:57:47.276176  205557 node_ready.go:57] node "old-k8s-version-132097" has "Ready":"False" status (will retry)
	W1123 08:57:49.777083  205557 node_ready.go:57] node "old-k8s-version-132097" has "Ready":"False" status (will retry)
	W1123 08:57:52.276267  205557 node_ready.go:57] node "old-k8s-version-132097" has "Ready":"False" status (will retry)
	W1123 08:57:54.776135  205557 node_ready.go:57] node "old-k8s-version-132097" has "Ready":"False" status (will retry)
	I1123 08:57:55.275780  205557 node_ready.go:49] node "old-k8s-version-132097" is "Ready"
	I1123 08:57:55.275810  205557 node_ready.go:38] duration metric: took 12.503373096s for node "old-k8s-version-132097" to be "Ready" ...
	I1123 08:57:55.275825  205557 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:57:55.275887  205557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:57:55.298771  205557 api_server.go:72] duration metric: took 13.782792148s to wait for apiserver process to appear ...
	I1123 08:57:55.298802  205557 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:57:55.298823  205557 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:57:55.308923  205557 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:57:55.310591  205557 api_server.go:141] control plane version: v1.28.0
	I1123 08:57:55.310647  205557 api_server.go:131] duration metric: took 11.836833ms to wait for apiserver health ...
	I1123 08:57:55.310657  205557 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:57:55.316780  205557 system_pods.go:59] 8 kube-system pods found
	I1123 08:57:55.316822  205557 system_pods.go:61] "coredns-5dd5756b68-8lvr2" [785f330b-add9-400f-a67a-7b6363a1c87e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:55.316829  205557 system_pods.go:61] "etcd-old-k8s-version-132097" [7e6e3c5c-fdfb-41dd-b5bd-10528eae39d6] Running
	I1123 08:57:55.316834  205557 system_pods.go:61] "kindnet-4qsxx" [460cc36d-ef1c-42af-a119-d8b5e5a667f3] Running
	I1123 08:57:55.316838  205557 system_pods.go:61] "kube-apiserver-old-k8s-version-132097" [a80a64e4-5649-4632-bf5a-45e6a21fd0fd] Running
	I1123 08:57:55.316842  205557 system_pods.go:61] "kube-controller-manager-old-k8s-version-132097" [446ffa14-8b7d-4786-8983-8f2e5ed2d1f1] Running
	I1123 08:57:55.316846  205557 system_pods.go:61] "kube-proxy-6lfm7" [4a3801eb-3ef6-464e-85cf-292e08e28bb7] Running
	I1123 08:57:55.316849  205557 system_pods.go:61] "kube-scheduler-old-k8s-version-132097" [a31c8365-52aa-4d0e-ae3d-d553a20ff782] Running
	I1123 08:57:55.316854  205557 system_pods.go:61] "storage-provisioner" [58fb64b1-807f-49e7-9c48-681619d898c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:55.316859  205557 system_pods.go:74] duration metric: took 6.197648ms to wait for pod list to return data ...
	I1123 08:57:55.316867  205557 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:57:55.319559  205557 default_sa.go:45] found service account: "default"
	I1123 08:57:55.319588  205557 default_sa.go:55] duration metric: took 2.714097ms for default service account to be created ...
	I1123 08:57:55.319598  205557 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:57:55.324396  205557 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:55.324478  205557 system_pods.go:89] "coredns-5dd5756b68-8lvr2" [785f330b-add9-400f-a67a-7b6363a1c87e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:55.324492  205557 system_pods.go:89] "etcd-old-k8s-version-132097" [7e6e3c5c-fdfb-41dd-b5bd-10528eae39d6] Running
	I1123 08:57:55.324499  205557 system_pods.go:89] "kindnet-4qsxx" [460cc36d-ef1c-42af-a119-d8b5e5a667f3] Running
	I1123 08:57:55.324504  205557 system_pods.go:89] "kube-apiserver-old-k8s-version-132097" [a80a64e4-5649-4632-bf5a-45e6a21fd0fd] Running
	I1123 08:57:55.324510  205557 system_pods.go:89] "kube-controller-manager-old-k8s-version-132097" [446ffa14-8b7d-4786-8983-8f2e5ed2d1f1] Running
	I1123 08:57:55.324514  205557 system_pods.go:89] "kube-proxy-6lfm7" [4a3801eb-3ef6-464e-85cf-292e08e28bb7] Running
	I1123 08:57:55.324517  205557 system_pods.go:89] "kube-scheduler-old-k8s-version-132097" [a31c8365-52aa-4d0e-ae3d-d553a20ff782] Running
	I1123 08:57:55.324523  205557 system_pods.go:89] "storage-provisioner" [58fb64b1-807f-49e7-9c48-681619d898c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:55.324553  205557 retry.go:31] will retry after 265.819586ms: missing components: kube-dns
	I1123 08:57:55.596189  205557 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:55.596222  205557 system_pods.go:89] "coredns-5dd5756b68-8lvr2" [785f330b-add9-400f-a67a-7b6363a1c87e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:55.596230  205557 system_pods.go:89] "etcd-old-k8s-version-132097" [7e6e3c5c-fdfb-41dd-b5bd-10528eae39d6] Running
	I1123 08:57:55.596237  205557 system_pods.go:89] "kindnet-4qsxx" [460cc36d-ef1c-42af-a119-d8b5e5a667f3] Running
	I1123 08:57:55.596241  205557 system_pods.go:89] "kube-apiserver-old-k8s-version-132097" [a80a64e4-5649-4632-bf5a-45e6a21fd0fd] Running
	I1123 08:57:55.596246  205557 system_pods.go:89] "kube-controller-manager-old-k8s-version-132097" [446ffa14-8b7d-4786-8983-8f2e5ed2d1f1] Running
	I1123 08:57:55.596250  205557 system_pods.go:89] "kube-proxy-6lfm7" [4a3801eb-3ef6-464e-85cf-292e08e28bb7] Running
	I1123 08:57:55.596254  205557 system_pods.go:89] "kube-scheduler-old-k8s-version-132097" [a31c8365-52aa-4d0e-ae3d-d553a20ff782] Running
	I1123 08:57:55.596263  205557 system_pods.go:89] "storage-provisioner" [58fb64b1-807f-49e7-9c48-681619d898c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:55.596281  205557 retry.go:31] will retry after 289.288774ms: missing components: kube-dns
	I1123 08:57:55.890405  205557 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:55.890438  205557 system_pods.go:89] "coredns-5dd5756b68-8lvr2" [785f330b-add9-400f-a67a-7b6363a1c87e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:55.890446  205557 system_pods.go:89] "etcd-old-k8s-version-132097" [7e6e3c5c-fdfb-41dd-b5bd-10528eae39d6] Running
	I1123 08:57:55.890452  205557 system_pods.go:89] "kindnet-4qsxx" [460cc36d-ef1c-42af-a119-d8b5e5a667f3] Running
	I1123 08:57:55.890457  205557 system_pods.go:89] "kube-apiserver-old-k8s-version-132097" [a80a64e4-5649-4632-bf5a-45e6a21fd0fd] Running
	I1123 08:57:55.890462  205557 system_pods.go:89] "kube-controller-manager-old-k8s-version-132097" [446ffa14-8b7d-4786-8983-8f2e5ed2d1f1] Running
	I1123 08:57:55.890466  205557 system_pods.go:89] "kube-proxy-6lfm7" [4a3801eb-3ef6-464e-85cf-292e08e28bb7] Running
	I1123 08:57:55.890470  205557 system_pods.go:89] "kube-scheduler-old-k8s-version-132097" [a31c8365-52aa-4d0e-ae3d-d553a20ff782] Running
	I1123 08:57:55.890476  205557 system_pods.go:89] "storage-provisioner" [58fb64b1-807f-49e7-9c48-681619d898c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:55.890495  205557 retry.go:31] will retry after 460.275032ms: missing components: kube-dns
	I1123 08:57:56.354648  205557 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:56.354681  205557 system_pods.go:89] "coredns-5dd5756b68-8lvr2" [785f330b-add9-400f-a67a-7b6363a1c87e] Running
	I1123 08:57:56.354689  205557 system_pods.go:89] "etcd-old-k8s-version-132097" [7e6e3c5c-fdfb-41dd-b5bd-10528eae39d6] Running
	I1123 08:57:56.354693  205557 system_pods.go:89] "kindnet-4qsxx" [460cc36d-ef1c-42af-a119-d8b5e5a667f3] Running
	I1123 08:57:56.354698  205557 system_pods.go:89] "kube-apiserver-old-k8s-version-132097" [a80a64e4-5649-4632-bf5a-45e6a21fd0fd] Running
	I1123 08:57:56.354703  205557 system_pods.go:89] "kube-controller-manager-old-k8s-version-132097" [446ffa14-8b7d-4786-8983-8f2e5ed2d1f1] Running
	I1123 08:57:56.354707  205557 system_pods.go:89] "kube-proxy-6lfm7" [4a3801eb-3ef6-464e-85cf-292e08e28bb7] Running
	I1123 08:57:56.354711  205557 system_pods.go:89] "kube-scheduler-old-k8s-version-132097" [a31c8365-52aa-4d0e-ae3d-d553a20ff782] Running
	I1123 08:57:56.354715  205557 system_pods.go:89] "storage-provisioner" [58fb64b1-807f-49e7-9c48-681619d898c6] Running
	I1123 08:57:56.354723  205557 system_pods.go:126] duration metric: took 1.035119204s to wait for k8s-apps to be running ...
	I1123 08:57:56.354734  205557 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:57:56.354790  205557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:57:56.367765  205557 system_svc.go:56] duration metric: took 13.022413ms WaitForService to wait for kubelet
	I1123 08:57:56.367791  205557 kubeadm.go:587] duration metric: took 14.851819067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:57:56.367810  205557 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:57:56.370552  205557 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:57:56.370585  205557 node_conditions.go:123] node cpu capacity is 2
	I1123 08:57:56.370598  205557 node_conditions.go:105] duration metric: took 2.78248ms to run NodePressure ...
	I1123 08:57:56.370610  205557 start.go:242] waiting for startup goroutines ...
	I1123 08:57:56.370618  205557 start.go:247] waiting for cluster config update ...
	I1123 08:57:56.370629  205557 start.go:256] writing updated cluster config ...
	I1123 08:57:56.370907  205557 ssh_runner.go:195] Run: rm -f paused
	I1123 08:57:56.374397  205557 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:57:56.378555  205557 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-8lvr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.383880  205557 pod_ready.go:94] pod "coredns-5dd5756b68-8lvr2" is "Ready"
	I1123 08:57:56.383903  205557 pod_ready.go:86] duration metric: took 5.319427ms for pod "coredns-5dd5756b68-8lvr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.387774  205557 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.393131  205557 pod_ready.go:94] pod "etcd-old-k8s-version-132097" is "Ready"
	I1123 08:57:56.393163  205557 pod_ready.go:86] duration metric: took 5.363924ms for pod "etcd-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.396395  205557 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.401275  205557 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-132097" is "Ready"
	I1123 08:57:56.401341  205557 pod_ready.go:86] duration metric: took 4.918249ms for pod "kube-apiserver-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.404345  205557 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.778738  205557 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-132097" is "Ready"
	I1123 08:57:56.778766  205557 pod_ready.go:86] duration metric: took 374.394138ms for pod "kube-controller-manager-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.980067  205557 pod_ready.go:83] waiting for pod "kube-proxy-6lfm7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:57.378396  205557 pod_ready.go:94] pod "kube-proxy-6lfm7" is "Ready"
	I1123 08:57:57.378426  205557 pod_ready.go:86] duration metric: took 398.283527ms for pod "kube-proxy-6lfm7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:57.579469  205557 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:57.978899  205557 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-132097" is "Ready"
	I1123 08:57:57.978928  205557 pod_ready.go:86] duration metric: took 399.387809ms for pod "kube-scheduler-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:57.978942  205557 pod_ready.go:40] duration metric: took 1.604514744s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:57:58.040107  205557 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 08:57:58.043481  205557 out.go:203] 
	W1123 08:57:58.046479  205557 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:57:58.049437  205557 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:57:58.053044  205557 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-132097" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	443f41771e05c       1611cd07b61d5       7 seconds ago       Running             busybox                   0                   85abf670da81c       busybox                                          default
	d791bd8ea3407       97e04611ad434       13 seconds ago      Running             coredns                   0                   dcc7b0a9a30b4       coredns-5dd5756b68-8lvr2                         kube-system
	028b80af9b9ff       ba04bb24b9575       13 seconds ago      Running             storage-provisioner       0                   3166da96aba38       storage-provisioner                              kube-system
	262637053c8ee       b1a8c6f707935       24 seconds ago      Running             kindnet-cni               0                   4589180347fb2       kindnet-4qsxx                                    kube-system
	5ce0118136c4e       940f54a5bcae9       26 seconds ago      Running             kube-proxy                0                   06f0effc91df6       kube-proxy-6lfm7                                 kube-system
	e18db1e58c264       9cdd6470f48c8       46 seconds ago      Running             etcd                      0                   d406878ab6eec       etcd-old-k8s-version-132097                      kube-system
	a39215833acd6       46cc66ccc7c19       46 seconds ago      Running             kube-controller-manager   0                   faa81e5d64709       kube-controller-manager-old-k8s-version-132097   kube-system
	91ed0354df278       00543d2fe5d71       46 seconds ago      Running             kube-apiserver            0                   7e599901c226d       kube-apiserver-old-k8s-version-132097            kube-system
	2c9790d4aacf8       762dce4090c5f       46 seconds ago      Running             kube-scheduler            0                   062cc54bf41d6       kube-scheduler-old-k8s-version-132097            kube-system
	
	
	==> containerd <==
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.335218943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8lvr2,Uid:785f330b-add9-400f-a67a-7b6363a1c87e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc7b0a9a30b4707a452574e5a16a4dac60a5d6943c039246545f5965671213f\""
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.344990342Z" level=info msg="CreateContainer within sandbox \"dcc7b0a9a30b4707a452574e5a16a4dac60a5d6943c039246545f5965671213f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.368459130Z" level=info msg="Container d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.382472405Z" level=info msg="CreateContainer within sandbox \"dcc7b0a9a30b4707a452574e5a16a4dac60a5d6943c039246545f5965671213f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f\""
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.389456614Z" level=info msg="StartContainer for \"d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f\""
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.390518148Z" level=info msg="connecting to shim d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f" address="unix:///run/containerd/s/cc0e752ae5af84c3942e02fc76087b3bc54562e675958bf09a37c137ca348912" protocol=ttrpc version=3
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.411234282Z" level=info msg="StartContainer for \"028b80af9b9ff071ccba6aaf458f6ef6e09b02b89f1b1e2d2504daa0b06b4ffd\" returns successfully"
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.484449976Z" level=info msg="StartContainer for \"d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f\" returns successfully"
	Nov 23 08:57:58 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:58.571119104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ee3865ff-dc6d-4911-94c7-09b6024edb7c,Namespace:default,Attempt:0,}"
	Nov 23 08:57:58 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:58.630842341Z" level=info msg="connecting to shim 85abf670da81c7a87741da49cde2f83dbe22def5dc9b21d63e3048ccf846a4e2" address="unix:///run/containerd/s/1893f1c067c0462f977516acb9a1a26aa54cab5ca38da35332e88ebfed76c954" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:57:58 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:58.691245339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ee3865ff-dc6d-4911-94c7-09b6024edb7c,Namespace:default,Attempt:0,} returns sandbox id \"85abf670da81c7a87741da49cde2f83dbe22def5dc9b21d63e3048ccf846a4e2\""
	Nov 23 08:57:58 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:58.695618499Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.904262576Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.906208648Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.908680725Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.911903120Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.912621913Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.216959869s"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.912733455Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.914630715Z" level=info msg="CreateContainer within sandbox \"85abf670da81c7a87741da49cde2f83dbe22def5dc9b21d63e3048ccf846a4e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.932438379Z" level=info msg="Container 443f41771e05c673621e113d8199bc58b51f9f7dc6f160793b9e330aa0663fe1: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.943426340Z" level=info msg="CreateContainer within sandbox \"85abf670da81c7a87741da49cde2f83dbe22def5dc9b21d63e3048ccf846a4e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"443f41771e05c673621e113d8199bc58b51f9f7dc6f160793b9e330aa0663fe1\""
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.944475517Z" level=info msg="StartContainer for \"443f41771e05c673621e113d8199bc58b51f9f7dc6f160793b9e330aa0663fe1\""
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.945415679Z" level=info msg="connecting to shim 443f41771e05c673621e113d8199bc58b51f9f7dc6f160793b9e330aa0663fe1" address="unix:///run/containerd/s/1893f1c067c0462f977516acb9a1a26aa54cab5ca38da35332e88ebfed76c954" protocol=ttrpc version=3
	Nov 23 08:58:01 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:01.008332789Z" level=info msg="StartContainer for \"443f41771e05c673621e113d8199bc58b51f9f7dc6f160793b9e330aa0663fe1\" returns successfully"
	Nov 23 08:58:07 old-k8s-version-132097 containerd[756]: E1123 08:58:07.395806     756 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36924 - 1492 "HINFO IN 1719261008560871152.4772551208800573690. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031590225s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-132097
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-132097
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=old-k8s-version-132097
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_57_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:57:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-132097
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:57:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:57:59 +0000   Sun, 23 Nov 2025 08:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:57:59 +0000   Sun, 23 Nov 2025 08:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:57:59 +0000   Sun, 23 Nov 2025 08:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:57:59 +0000   Sun, 23 Nov 2025 08:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-132097
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                aede0b78-2473-402b-93c1-c2cfe1396e63
	  Boot ID:                    86d8501c-1df5-4d7e-90cb-d9ad951202c5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-5dd5756b68-8lvr2                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     27s
	  kube-system                 etcd-old-k8s-version-132097                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-4qsxx                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      27s
	  kube-system                 kube-apiserver-old-k8s-version-132097             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-132097    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-6lfm7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-scheduler-old-k8s-version-132097             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 40s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s   kubelet          Node old-k8s-version-132097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s   kubelet          Node old-k8s-version-132097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s   kubelet          Node old-k8s-version-132097 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28s   node-controller  Node old-k8s-version-132097 event: Registered Node old-k8s-version-132097 in Controller
	  Normal  NodeReady                14s   kubelet          Node old-k8s-version-132097 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014670] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505841] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033008] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.738583] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.057424] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:10] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:26] hrtimer: interrupt took 58442338 ns
	
	
	==> etcd [e18db1e58c2640e7866f60c485d7eab1e2a24b23fac62c295a037323425437fe] <==
	{"level":"info","ts":"2025-11-23T08:57:21.955003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-23T08:57:21.955252Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:57:21.965954Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T08:57:21.966169Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T08:57:21.963226Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:57:21.967095Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:57:21.967222Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:57:22.382442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T08:57:22.382712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T08:57:22.382851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-23T08:57:22.382965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:57:22.383049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T08:57:22.383141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-23T08:57:22.383243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T08:57:22.385012Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-132097 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:57:22.385184Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:57:22.385575Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:57:22.387543Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-23T08:57:22.391753Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:57:22.392593Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:57:22.395676Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T08:57:22.395279Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:57:22.399318Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:57:22.400444Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:57:22.403395Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 08:58:08 up  1:40,  0 user,  load average: 3.82, 3.96, 2.97
	Linux old-k8s-version-132097 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [262637053c8ee625a3046202afe327f1cbb2db99ae7880d1f98b14410b912320] <==
	I1123 08:57:44.423888       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:57:44.424125       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:57:44.424308       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:57:44.424328       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:57:44.424342       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:57:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:57:44.722383       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:57:44.722540       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:57:44.722606       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:57:44.723642       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:57:44.923167       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:57:44.923194       1 metrics.go:72] Registering metrics
	I1123 08:57:44.923246       1 controller.go:711] "Syncing nftables rules"
	I1123 08:57:54.720199       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:57:54.720264       1 main.go:301] handling current node
	I1123 08:58:04.720328       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:58:04.720420       1 main.go:301] handling current node
	
	
	==> kube-apiserver [91ed0354df2789371f586deb2efd76f6b353c168f5d0057d043d9428ec15a073] <==
	I1123 08:57:25.646741       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 08:57:25.648680       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:57:25.650094       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:57:25.651731       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:57:25.651826       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:57:25.651843       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:57:25.651850       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:57:25.651856       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:57:25.683457       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 08:57:25.691275       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:57:26.455551       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:57:26.462134       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:57:26.462425       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:57:27.201994       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:57:27.249248       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:57:27.384920       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:57:27.399717       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:57:27.401749       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:57:27.408887       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:57:27.638262       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:57:28.790710       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:57:28.805005       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:57:28.821606       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 08:57:41.246676       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 08:57:41.400035       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a39215833acd606b990fcfb98f27976ee37011ce0a9e89cbfe4edd8da0204682] <==
	I1123 08:57:40.644795       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:57:40.648914       1 shared_informer.go:318] Caches are synced for daemon sets
	I1123 08:57:40.669568       1 shared_informer.go:318] Caches are synced for attach detach
	I1123 08:57:41.034219       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:57:41.069125       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:57:41.069338       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:57:41.253094       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 08:57:41.417459       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4qsxx"
	I1123 08:57:41.428699       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6lfm7"
	I1123 08:57:41.506669       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-7zb7t"
	I1123 08:57:41.549168       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8lvr2"
	I1123 08:57:41.572108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="318.669074ms"
	I1123 08:57:41.607804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.196229ms"
	I1123 08:57:41.619720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="137.134µs"
	I1123 08:57:42.827853       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 08:57:42.842789       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-7zb7t"
	I1123 08:57:42.860071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.403347ms"
	I1123 08:57:42.872477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.358791ms"
	I1123 08:57:42.873476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.181µs"
	I1123 08:57:54.831574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.628µs"
	I1123 08:57:54.871654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.524µs"
	I1123 08:57:55.592477       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1123 08:57:56.191547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.086µs"
	I1123 08:57:56.232618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.087889ms"
	I1123 08:57:56.232830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.526µs"
	
	
	==> kube-proxy [5ce0118136c4ec2de35306557d9fc06af6aaa216d5785486748a430eb80abf06] <==
	I1123 08:57:42.278043       1 server_others.go:69] "Using iptables proxy"
	I1123 08:57:42.295124       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1123 08:57:42.346235       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:57:42.348370       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:57:42.348415       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:57:42.348424       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:57:42.348449       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:57:42.348648       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:57:42.348666       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:57:42.349577       1 config.go:188] "Starting service config controller"
	I1123 08:57:42.349617       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:57:42.349642       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:57:42.349650       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:57:42.354459       1 config.go:315] "Starting node config controller"
	I1123 08:57:42.354505       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:57:42.449923       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 08:57:42.449978       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:57:42.455480       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [2c9790d4aacf8186f2b6020db5c0fcf23b94a4dd352884aa58bf5601213b7764] <==
	W1123 08:57:26.643688       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 08:57:26.643732       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:57:26.643702       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 08:57:26.643870       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 08:57:26.647635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 08:57:26.647669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 08:57:26.647856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 08:57:26.647879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 08:57:26.648580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:57:26.648648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:57:26.652463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:57:26.652498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 08:57:26.653338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1123 08:57:26.653365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 08:57:26.653581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 08:57:26.653634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 08:57:26.653731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 08:57:26.653748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 08:57:26.653800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 08:57:26.653814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 08:57:26.653902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 08:57:26.653916       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 08:57:26.653948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 08:57:26.653962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1123 08:57:27.838013       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:57:40 old-k8s-version-132097 kubelet[1531]: I1123 08:57:40.615021    1531 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.438219    1531 topology_manager.go:215] "Topology Admit Handler" podUID="460cc36d-ef1c-42af-a119-d8b5e5a667f3" podNamespace="kube-system" podName="kindnet-4qsxx"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.458829    1531 topology_manager.go:215] "Topology Admit Handler" podUID="4a3801eb-3ef6-464e-85cf-292e08e28bb7" podNamespace="kube-system" podName="kube-proxy-6lfm7"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.576882    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a3801eb-3ef6-464e-85cf-292e08e28bb7-kube-proxy\") pod \"kube-proxy-6lfm7\" (UID: \"4a3801eb-3ef6-464e-85cf-292e08e28bb7\") " pod="kube-system/kube-proxy-6lfm7"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.577180    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a3801eb-3ef6-464e-85cf-292e08e28bb7-xtables-lock\") pod \"kube-proxy-6lfm7\" (UID: \"4a3801eb-3ef6-464e-85cf-292e08e28bb7\") " pod="kube-system/kube-proxy-6lfm7"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.577426    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/460cc36d-ef1c-42af-a119-d8b5e5a667f3-xtables-lock\") pod \"kindnet-4qsxx\" (UID: \"460cc36d-ef1c-42af-a119-d8b5e5a667f3\") " pod="kube-system/kindnet-4qsxx"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.577602    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/460cc36d-ef1c-42af-a119-d8b5e5a667f3-lib-modules\") pod \"kindnet-4qsxx\" (UID: \"460cc36d-ef1c-42af-a119-d8b5e5a667f3\") " pod="kube-system/kindnet-4qsxx"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.577765    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a3801eb-3ef6-464e-85cf-292e08e28bb7-lib-modules\") pod \"kube-proxy-6lfm7\" (UID: \"4a3801eb-3ef6-464e-85cf-292e08e28bb7\") " pod="kube-system/kube-proxy-6lfm7"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.577940    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/460cc36d-ef1c-42af-a119-d8b5e5a667f3-cni-cfg\") pod \"kindnet-4qsxx\" (UID: \"460cc36d-ef1c-42af-a119-d8b5e5a667f3\") " pod="kube-system/kindnet-4qsxx"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.578101    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjpf9\" (UniqueName: \"kubernetes.io/projected/460cc36d-ef1c-42af-a119-d8b5e5a667f3-kube-api-access-rjpf9\") pod \"kindnet-4qsxx\" (UID: \"460cc36d-ef1c-42af-a119-d8b5e5a667f3\") " pod="kube-system/kindnet-4qsxx"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.578330    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jnhp\" (UniqueName: \"kubernetes.io/projected/4a3801eb-3ef6-464e-85cf-292e08e28bb7-kube-api-access-7jnhp\") pod \"kube-proxy-6lfm7\" (UID: \"4a3801eb-3ef6-464e-85cf-292e08e28bb7\") " pod="kube-system/kube-proxy-6lfm7"
	Nov 23 08:57:43 old-k8s-version-132097 kubelet[1531]: I1123 08:57:43.151928    1531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6lfm7" podStartSLOduration=2.15188228 podCreationTimestamp="2025-11-23 08:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:57:43.151723927 +0000 UTC m=+14.389401589" watchObservedRunningTime="2025-11-23 08:57:43.15188228 +0000 UTC m=+14.389559925"
	Nov 23 08:57:45 old-k8s-version-132097 kubelet[1531]: I1123 08:57:45.193171    1531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4qsxx" podStartSLOduration=2.184281555 podCreationTimestamp="2025-11-23 08:57:41 +0000 UTC" firstStartedPulling="2025-11-23 08:57:42.069696835 +0000 UTC m=+13.307374463" lastFinishedPulling="2025-11-23 08:57:44.078534141 +0000 UTC m=+15.316211770" observedRunningTime="2025-11-23 08:57:45.190347369 +0000 UTC m=+16.428025023" watchObservedRunningTime="2025-11-23 08:57:45.193118862 +0000 UTC m=+16.430796499"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.786544    1531 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.830913    1531 topology_manager.go:215] "Topology Admit Handler" podUID="785f330b-add9-400f-a67a-7b6363a1c87e" podNamespace="kube-system" podName="coredns-5dd5756b68-8lvr2"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.834405    1531 topology_manager.go:215] "Topology Admit Handler" podUID="58fb64b1-807f-49e7-9c48-681619d898c6" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.874977    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcns6\" (UniqueName: \"kubernetes.io/projected/58fb64b1-807f-49e7-9c48-681619d898c6-kube-api-access-jcns6\") pod \"storage-provisioner\" (UID: \"58fb64b1-807f-49e7-9c48-681619d898c6\") " pod="kube-system/storage-provisioner"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.875052    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/785f330b-add9-400f-a67a-7b6363a1c87e-config-volume\") pod \"coredns-5dd5756b68-8lvr2\" (UID: \"785f330b-add9-400f-a67a-7b6363a1c87e\") " pod="kube-system/coredns-5dd5756b68-8lvr2"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.875094    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/58fb64b1-807f-49e7-9c48-681619d898c6-tmp\") pod \"storage-provisioner\" (UID: \"58fb64b1-807f-49e7-9c48-681619d898c6\") " pod="kube-system/storage-provisioner"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.875132    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfqcr\" (UniqueName: \"kubernetes.io/projected/785f330b-add9-400f-a67a-7b6363a1c87e-kube-api-access-dfqcr\") pod \"coredns-5dd5756b68-8lvr2\" (UID: \"785f330b-add9-400f-a67a-7b6363a1c87e\") " pod="kube-system/coredns-5dd5756b68-8lvr2"
	Nov 23 08:57:56 old-k8s-version-132097 kubelet[1531]: I1123 08:57:56.213611    1531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8lvr2" podStartSLOduration=15.213558978 podCreationTimestamp="2025-11-23 08:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:57:56.192056283 +0000 UTC m=+27.429733920" watchObservedRunningTime="2025-11-23 08:57:56.213558978 +0000 UTC m=+27.451236607"
	Nov 23 08:57:58 old-k8s-version-132097 kubelet[1531]: I1123 08:57:58.264381    1531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.264335314 podCreationTimestamp="2025-11-23 08:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:57:56.246040076 +0000 UTC m=+27.483717705" watchObservedRunningTime="2025-11-23 08:57:58.264335314 +0000 UTC m=+29.502012943"
	Nov 23 08:57:58 old-k8s-version-132097 kubelet[1531]: I1123 08:57:58.264678    1531 topology_manager.go:215] "Topology Admit Handler" podUID="ee3865ff-dc6d-4911-94c7-09b6024edb7c" podNamespace="default" podName="busybox"
	Nov 23 08:57:58 old-k8s-version-132097 kubelet[1531]: I1123 08:57:58.299576    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9r4f\" (UniqueName: \"kubernetes.io/projected/ee3865ff-dc6d-4911-94c7-09b6024edb7c-kube-api-access-b9r4f\") pod \"busybox\" (UID: \"ee3865ff-dc6d-4911-94c7-09b6024edb7c\") " pod="default/busybox"
	Nov 23 08:58:01 old-k8s-version-132097 kubelet[1531]: I1123 08:58:01.221466    1531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.000997351 podCreationTimestamp="2025-11-23 08:57:58 +0000 UTC" firstStartedPulling="2025-11-23 08:57:58.692634622 +0000 UTC m=+29.930312259" lastFinishedPulling="2025-11-23 08:58:00.913057446 +0000 UTC m=+32.150735083" observedRunningTime="2025-11-23 08:58:01.221072332 +0000 UTC m=+32.458749960" watchObservedRunningTime="2025-11-23 08:58:01.221420175 +0000 UTC m=+32.459097813"
	
	
	==> storage-provisioner [028b80af9b9ff071ccba6aaf458f6ef6e09b02b89f1b1e2d2504daa0b06b4ffd] <==
	I1123 08:57:55.420215       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:57:55.444955       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:57:55.445179       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:57:55.459038       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:57:55.462063       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-132097_336630f9-1c2a-4210-93a8-d3736b9ac669!
	I1123 08:57:55.464703       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a8c1278-4d51-4a69-a242-d47431dca2ba", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-132097_336630f9-1c2a-4210-93a8-d3736b9ac669 became leader
	I1123 08:57:55.563109       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-132097_336630f9-1c2a-4210-93a8-d3736b9ac669!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-132097 -n old-k8s-version-132097
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-132097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-132097
helpers_test.go:243: (dbg) docker inspect old-k8s-version-132097:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef",
	        "Created": "2025-11-23T08:57:04.667839157Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 205947,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:57:04.769159087Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef/hostname",
	        "HostsPath": "/var/lib/docker/containers/4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef/hosts",
	        "LogPath": "/var/lib/docker/containers/4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef/4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef-json.log",
	        "Name": "/old-k8s-version-132097",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-132097:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-132097",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4d0452bb4c92012df572e99c794a7d8e79d0c9214562a3efa8e8a3ae1ddbb7ef",
	                "LowerDir": "/var/lib/docker/overlay2/ec3c232564eb8b8c04270c5b0c95eedb013a5868deed42f0509c302335a2d989-init/diff:/var/lib/docker/overlay2/e1de88c117c0c773e1fa636243190fd97eadaa5a8e1ee08fd53827cbac767d35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ec3c232564eb8b8c04270c5b0c95eedb013a5868deed42f0509c302335a2d989/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ec3c232564eb8b8c04270c5b0c95eedb013a5868deed42f0509c302335a2d989/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ec3c232564eb8b8c04270c5b0c95eedb013a5868deed42f0509c302335a2d989/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-132097",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-132097/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-132097",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-132097",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-132097",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "58c55dbb2ab6b018b92d4ec3c5691fa02993e68f8d136bf1df6a3c7e37ab8808",
	            "SandboxKey": "/var/run/docker/netns/58c55dbb2ab6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33058"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-132097": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:9b:d3:73:5f:30",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "109fa1b68d0f825ac67c625cd8049aeedd7e3d80891821156d3bdfaf1d82aaa5",
	                    "EndpointID": "c9cddc19ad17439228c12a59f80e1b67ed89c3737c3d6b86f08ac0b30fc26527",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-132097",
	                        "4d0452bb4c92"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-132097 -n old-k8s-version-132097
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-132097 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-132097 logs -n 25: (1.208844501s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬───────────
──────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼───────────
──────────┤
	│ ssh     │ -p cilium-694698 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                           │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                           │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo containerd config dump                                                                                                                                                                                                        │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ ssh     │ -p cilium-694698 sudo crio config                                                                                                                                                                                                                   │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ delete  │ -p cilium-694698                                                                                                                                                                                                                                    │ cilium-694698             │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p force-systemd-env-023309 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-023309  │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p kubernetes-upgrade-291582                                                                                                                                                                                                                        │ kubernetes-upgrade-291582 │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p cert-expiration-918102 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-918102    │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ force-systemd-env-023309 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-023309  │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p force-systemd-env-023309                                                                                                                                                                                                                         │ force-systemd-env-023309  │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-options-886452 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-886452       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ cert-options-886452 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-886452       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ -p cert-options-886452 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-886452       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-options-886452                                                                                                                                                                                                                              │ cert-options-886452       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-132097    │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴───────────
──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:56:58
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:56:58.075716  205557 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:56:58.075955  205557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:56:58.075994  205557 out.go:374] Setting ErrFile to fd 2...
	I1123 08:56:58.076024  205557 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:56:58.076443  205557 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:56:58.077075  205557 out.go:368] Setting JSON to false
	I1123 08:56:58.078143  205557 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5970,"bootTime":1763882248,"procs":183,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:56:58.078269  205557 start.go:143] virtualization:  
	I1123 08:56:58.082214  205557 out.go:179] * [old-k8s-version-132097] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:56:58.087005  205557 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:56:58.087084  205557 notify.go:221] Checking for updates...
	I1123 08:56:58.094076  205557 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:56:58.097487  205557 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:56:58.100734  205557 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:56:58.103928  205557 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:56:58.107100  205557 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:56:58.110841  205557 config.go:182] Loaded profile config "cert-expiration-918102": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:56:58.111023  205557 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:56:58.149865  205557 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:56:58.150008  205557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:56:58.209508  205557 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:56:58.200005902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:56:58.209615  205557 docker.go:319] overlay module found
	I1123 08:56:58.212913  205557 out.go:179] * Using the docker driver based on user configuration
	I1123 08:56:58.216078  205557 start.go:309] selected driver: docker
	I1123 08:56:58.216104  205557 start.go:927] validating driver "docker" against <nil>
	I1123 08:56:58.216119  205557 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:56:58.216866  205557 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:56:58.280338  205557 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:56:58.270474537 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:56:58.280506  205557 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:56:58.280724  205557 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:56:58.283780  205557 out.go:179] * Using Docker driver with root privileges
	I1123 08:56:58.286736  205557 cni.go:84] Creating CNI manager for ""
	I1123 08:56:58.286810  205557 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:56:58.286826  205557 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:56:58.286923  205557 start.go:353] cluster config:
	{Name:old-k8s-version-132097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-132097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:56:58.292003  205557 out.go:179] * Starting "old-k8s-version-132097" primary control-plane node in "old-k8s-version-132097" cluster
	I1123 08:56:58.294873  205557 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:56:58.297758  205557 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:56:58.300686  205557 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:56:58.300734  205557 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 08:56:58.300747  205557 cache.go:65] Caching tarball of preloaded images
	I1123 08:56:58.300769  205557 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:56:58.300828  205557 preload.go:238] Found /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:56:58.300838  205557 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:56:58.300946  205557 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/config.json ...
	I1123 08:56:58.300964  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/config.json: {Name:mk1988d6b954c625d3bd1df0ce00c5571f04128f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:56:58.320967  205557 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:56:58.320991  205557 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:56:58.321007  205557 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:56:58.321037  205557 start.go:360] acquireMachinesLock for old-k8s-version-132097: {Name:mk569d745a741486fc2918f879c45baa624a6ce4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:56:58.321139  205557 start.go:364] duration metric: took 82.184µs to acquireMachinesLock for "old-k8s-version-132097"
	I1123 08:56:58.321169  205557 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-132097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-132097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:56:58.321245  205557 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:56:58.324673  205557 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:56:58.324908  205557 start.go:159] libmachine.API.Create for "old-k8s-version-132097" (driver="docker")
	I1123 08:56:58.324945  205557 client.go:173] LocalClient.Create starting
	I1123 08:56:58.325029  205557 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem
	I1123 08:56:58.325064  205557 main.go:143] libmachine: Decoding PEM data...
	I1123 08:56:58.325084  205557 main.go:143] libmachine: Parsing certificate...
	I1123 08:56:58.325137  205557 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem
	I1123 08:56:58.325165  205557 main.go:143] libmachine: Decoding PEM data...
	I1123 08:56:58.325181  205557 main.go:143] libmachine: Parsing certificate...
	I1123 08:56:58.325539  205557 cli_runner.go:164] Run: docker network inspect old-k8s-version-132097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:56:58.341601  205557 cli_runner.go:211] docker network inspect old-k8s-version-132097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:56:58.341689  205557 network_create.go:284] running [docker network inspect old-k8s-version-132097] to gather additional debugging logs...
	I1123 08:56:58.341707  205557 cli_runner.go:164] Run: docker network inspect old-k8s-version-132097
	W1123 08:56:58.358311  205557 cli_runner.go:211] docker network inspect old-k8s-version-132097 returned with exit code 1
	I1123 08:56:58.358345  205557 network_create.go:287] error running [docker network inspect old-k8s-version-132097]: docker network inspect old-k8s-version-132097: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-132097 not found
	I1123 08:56:58.358358  205557 network_create.go:289] output of [docker network inspect old-k8s-version-132097]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-132097 not found
	
	** /stderr **
	I1123 08:56:58.358575  205557 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:56:58.374942  205557 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ab12b2c3b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:c9:6d:7b:80:76} reservation:<nil>}
	I1123 08:56:58.375286  205557 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7f5e4a52a57c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:1a:79:b2:02:66} reservation:<nil>}
	I1123 08:56:58.375689  205557 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed031858d624 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:47:7d:04:56:4a} reservation:<nil>}
	I1123 08:56:58.375909  205557 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-7b189b3c67c1 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:f6:56:52:0a:44:1f} reservation:<nil>}
	I1123 08:56:58.376301  205557 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a0b0e0}
	I1123 08:56:58.376319  205557 network_create.go:124] attempt to create docker network old-k8s-version-132097 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1123 08:56:58.376383  205557 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-132097 old-k8s-version-132097
	I1123 08:56:58.438591  205557 network_create.go:108] docker network old-k8s-version-132097 192.168.85.0/24 created
	I1123 08:56:58.438623  205557 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-132097" container
	I1123 08:56:58.438715  205557 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:56:58.457835  205557 cli_runner.go:164] Run: docker volume create old-k8s-version-132097 --label name.minikube.sigs.k8s.io=old-k8s-version-132097 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:56:58.476578  205557 oci.go:103] Successfully created a docker volume old-k8s-version-132097
	I1123 08:56:58.476673  205557 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-132097-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-132097 --entrypoint /usr/bin/test -v old-k8s-version-132097:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:56:58.996286  205557 oci.go:107] Successfully prepared a docker volume old-k8s-version-132097
	I1123 08:56:58.996363  205557 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:56:58.996381  205557 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:56:58.996448  205557 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-132097:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:57:04.592308  205557 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-132097:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.595821587s)
	I1123 08:57:04.592349  205557 kic.go:203] duration metric: took 5.595965588s to extract preloaded images to volume ...
	W1123 08:57:04.592490  205557 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:57:04.592606  205557 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:57:04.651143  205557 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-132097 --name old-k8s-version-132097 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-132097 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-132097 --network old-k8s-version-132097 --ip 192.168.85.2 --volume old-k8s-version-132097:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:57:05.027333  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Running}}
	I1123 08:57:05.053202  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:05.077042  205557 cli_runner.go:164] Run: docker exec old-k8s-version-132097 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:57:05.138471  205557 oci.go:144] the created container "old-k8s-version-132097" has a running status.
	I1123 08:57:05.138504  205557 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa...
	I1123 08:57:05.830255  205557 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:57:05.850811  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:05.869025  205557 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:57:05.869046  205557 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-132097 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:57:05.911771  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:05.939737  205557 machine.go:94] provisionDockerMachine start ...
	I1123 08:57:05.939829  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:05.957917  205557 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:05.958259  205557 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1123 08:57:05.958282  205557 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:57:05.958921  205557 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:57:09.115043  205557 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-132097
	
	I1123 08:57:09.115066  205557 ubuntu.go:182] provisioning hostname "old-k8s-version-132097"
	I1123 08:57:09.115142  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:09.132902  205557 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:09.133227  205557 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1123 08:57:09.133247  205557 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-132097 && echo "old-k8s-version-132097" | sudo tee /etc/hostname
	I1123 08:57:09.292790  205557 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-132097
	
	I1123 08:57:09.292878  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:09.311023  205557 main.go:143] libmachine: Using SSH client type: native
	I1123 08:57:09.311437  205557 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33054 <nil> <nil>}
	I1123 08:57:09.311458  205557 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-132097' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-132097/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-132097' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:57:09.463645  205557 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:57:09.463673  205557 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-2811/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-2811/.minikube}
	I1123 08:57:09.463693  205557 ubuntu.go:190] setting up certificates
	I1123 08:57:09.463703  205557 provision.go:84] configureAuth start
	I1123 08:57:09.463774  205557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132097
	I1123 08:57:09.490883  205557 provision.go:143] copyHostCerts
	I1123 08:57:09.490965  205557 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem, removing ...
	I1123 08:57:09.490980  205557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem
	I1123 08:57:09.491064  205557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem (1679 bytes)
	I1123 08:57:09.491170  205557 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem, removing ...
	I1123 08:57:09.491181  205557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem
	I1123 08:57:09.491218  205557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem (1082 bytes)
	I1123 08:57:09.491292  205557 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem, removing ...
	I1123 08:57:09.491301  205557 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem
	I1123 08:57:09.491332  205557 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem (1123 bytes)
	I1123 08:57:09.491424  205557 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-132097 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-132097]
	I1123 08:57:09.995296  205557 provision.go:177] copyRemoteCerts
	I1123 08:57:09.995378  205557 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:57:09.995425  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:10.022197  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:10.131600  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1123 08:57:10.151668  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:57:10.170212  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:57:10.193623  205557 provision.go:87] duration metric: took 729.898059ms to configureAuth
	I1123 08:57:10.193653  205557 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:57:10.193893  205557 config.go:182] Loaded profile config "old-k8s-version-132097": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:57:10.193908  205557 machine.go:97] duration metric: took 4.254150104s to provisionDockerMachine
	I1123 08:57:10.193917  205557 client.go:176] duration metric: took 11.868961603s to LocalClient.Create
	I1123 08:57:10.193936  205557 start.go:167] duration metric: took 11.869028862s to libmachine.API.Create "old-k8s-version-132097"
	I1123 08:57:10.193949  205557 start.go:293] postStartSetup for "old-k8s-version-132097" (driver="docker")
	I1123 08:57:10.193959  205557 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:57:10.194028  205557 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:57:10.194072  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:10.214297  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:10.319610  205557 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:57:10.322901  205557 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:57:10.322933  205557 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:57:10.322946  205557 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/addons for local assets ...
	I1123 08:57:10.323005  205557 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/files for local assets ...
	I1123 08:57:10.323088  205557 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem -> 46242.pem in /etc/ssl/certs
	I1123 08:57:10.323198  205557 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:57:10.331074  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:57:10.349627  205557 start.go:296] duration metric: took 155.662635ms for postStartSetup
	I1123 08:57:10.350015  205557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132097
	I1123 08:57:10.367315  205557 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/config.json ...
	I1123 08:57:10.367640  205557 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:57:10.367695  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:10.384602  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:10.488274  205557 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:57:10.493105  205557 start.go:128] duration metric: took 12.171844318s to createHost
	I1123 08:57:10.493131  205557 start.go:83] releasing machines lock for "old-k8s-version-132097", held for 12.171978088s
	I1123 08:57:10.493204  205557 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-132097
	I1123 08:57:10.510164  205557 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:57:10.510257  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:10.510172  205557 ssh_runner.go:195] Run: cat /version.json
	I1123 08:57:10.510356  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:10.526416  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:10.545392  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:10.718091  205557 ssh_runner.go:195] Run: systemctl --version
	I1123 08:57:10.724586  205557 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:57:10.729451  205557 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:57:10.729547  205557 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:57:10.757493  205557 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:57:10.757566  205557 start.go:496] detecting cgroup driver to use...
	I1123 08:57:10.757615  205557 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:57:10.757691  205557 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:57:10.772292  205557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:57:10.785621  205557 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:57:10.785731  205557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:57:10.802884  205557 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:57:10.823678  205557 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:57:10.945776  205557 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:57:11.077075  205557 docker.go:234] disabling docker service ...
	I1123 08:57:11.077193  205557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:57:11.100476  205557 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:57:11.114000  205557 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:57:11.241484  205557 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:57:11.359238  205557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:57:11.371792  205557 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:57:11.385313  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1123 08:57:11.394176  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:57:11.403118  205557 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:57:11.403204  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:57:11.411891  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:57:11.420403  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:57:11.429844  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:57:11.438472  205557 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:57:11.448237  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:57:11.457958  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:57:11.467751  205557 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:57:11.477714  205557 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:57:11.485453  205557 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:57:11.492795  205557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:57:11.619250  205557 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:57:11.753073  205557 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:57:11.753220  205557 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:57:11.757105  205557 start.go:564] Will wait 60s for crictl version
	I1123 08:57:11.757216  205557 ssh_runner.go:195] Run: which crictl
	I1123 08:57:11.760762  205557 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:57:11.786059  205557 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:57:11.786175  205557 ssh_runner.go:195] Run: containerd --version
	I1123 08:57:11.808380  205557 ssh_runner.go:195] Run: containerd --version
	I1123 08:57:11.834126  205557 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1123 08:57:11.837087  205557 cli_runner.go:164] Run: docker network inspect old-k8s-version-132097 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:57:11.854569  205557 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:57:11.858321  205557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:57:11.868036  205557 kubeadm.go:884] updating cluster {Name:old-k8s-version-132097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-132097 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cu
stomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:57:11.868180  205557 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:57:11.868250  205557 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:57:11.892439  205557 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:57:11.892464  205557 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:57:11.892528  205557 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:57:11.916323  205557 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:57:11.916348  205557 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:57:11.916357  205557 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1123 08:57:11.916452  205557 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-132097 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-132097 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:57:11.916521  205557 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:57:11.955629  205557 cni.go:84] Creating CNI manager for ""
	I1123 08:57:11.955653  205557 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:57:11.955672  205557 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:57:11.955696  205557 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-132097 NodeName:old-k8s-version-132097 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:57:11.955852  205557 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-132097"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:57:11.955924  205557 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1123 08:57:11.964586  205557 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:57:11.964655  205557 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:57:11.972851  205557 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1123 08:57:11.985890  205557 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:57:11.999580  205557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
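	The kubeadm config dumped above is what gets written to /var/tmp/minikube/kubeadm.yaml.new here; note that cgroupDriver is set to "cgroupfs" because that driver was detected on the host earlier in the trace. A minimal Go sketch of templating just the KubeletConfiguration fragment from a detected driver (hypothetical template and variable names, not minikube's bootstrapper code) might look like:

	```go
	package main

	import (
		"os"
		"text/template"
	)

	// kubeletConfigTmpl mirrors the KubeletConfiguration fragment in the log;
	// only cgroupDriver is parameterised here for illustration.
	const kubeletConfigTmpl = `apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	cgroupDriver: {{.CgroupDriver}}
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	`

	func main() {
		// "cgroupfs" matches the driver detected on the host earlier in the log.
		data := struct{ CgroupDriver string }{CgroupDriver: "cgroupfs"}
		t := template.Must(template.New("kubelet").Parse(kubeletConfigTmpl))
		_ = t.Execute(os.Stdout, data)
	}
	```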
	I1123 08:57:12.016007  205557 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:57:12.021860  205557 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:57:12.035856  205557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:57:12.145204  205557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:57:12.162111  205557 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097 for IP: 192.168.85.2
	I1123 08:57:12.162136  205557 certs.go:195] generating shared ca certs ...
	I1123 08:57:12.162153  205557 certs.go:227] acquiring lock for ca certs: {Name:mk62ed57b444cc29d692b7c3030f7d32bd07c4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.162293  205557 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key
	I1123 08:57:12.162343  205557 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key
	I1123 08:57:12.162358  205557 certs.go:257] generating profile certs ...
	I1123 08:57:12.162414  205557 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.key
	I1123 08:57:12.162430  205557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt with IP's: []
	I1123 08:57:12.346401  205557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt ...
	I1123 08:57:12.346432  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: {Name:mke988f2355e47aa3b3cecde8bcb924023bd7a1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.346632  205557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.key ...
	I1123 08:57:12.346659  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.key: {Name:mka607eb42432889fa6550a717949c1750577787 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.346751  205557 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key.82dac9d1
	I1123 08:57:12.346774  205557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt.82dac9d1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:57:12.400945  205557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt.82dac9d1 ...
	I1123 08:57:12.400971  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt.82dac9d1: {Name:mk94df40e4fc2d589b542121aa0a3b7e606816f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.401142  205557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key.82dac9d1 ...
	I1123 08:57:12.401155  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key.82dac9d1: {Name:mkfa0fabc149dbc3e492e0dda94c640912f6ea5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.401245  205557 certs.go:382] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt.82dac9d1 -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt
	I1123 08:57:12.401335  205557 certs.go:386] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key.82dac9d1 -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key
	I1123 08:57:12.401397  205557 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.key
	I1123 08:57:12.401417  205557 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.crt with IP's: []
	I1123 08:57:12.542191  205557 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.crt ...
	I1123 08:57:12.542223  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.crt: {Name:mk38a6f6cc975b1ab50cc4eb53e87eb31af36277 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:12.542397  205557 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.key ...
	I1123 08:57:12.542422  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.key: {Name:mkc1a85f87bbbc56731c9b0fb3a53076a0b001d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
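	The "generating signed profile cert" steps above issue per-profile certificates from the cached minikubeCA, embedding the IP SANs shown in the log (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). Below is a self-contained Go sketch of issuing a server certificate with such SANs from a throwaway CA, using only the standard crypto/x509 package; the subject and DNS names are illustrative values drawn from SAN lists elsewhere in this log, and this is not minikube's crypto.go.

	```go
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Throwaway self-signed CA standing in for the cached minikubeCA key pair.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate carrying the IP SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-132097"},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.85.2"),
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued apiserver-style cert: %d DER bytes\n", len(srvDER))
	}
	```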
	I1123 08:57:12.542626  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem (1338 bytes)
	W1123 08:57:12.542674  205557 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624_empty.pem, impossibly tiny 0 bytes
	I1123 08:57:12.542689  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:57:12.542728  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:57:12.542758  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:57:12.542785  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem (1679 bytes)
	I1123 08:57:12.542835  205557 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:57:12.543488  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:57:12.562736  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:57:12.581539  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:57:12.600098  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:57:12.618329  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1123 08:57:12.636558  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:57:12.654553  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:57:12.671996  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:57:12.689823  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:57:12.708777  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem --> /usr/share/ca-certificates/4624.pem (1338 bytes)
	I1123 08:57:12.726519  205557 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /usr/share/ca-certificates/46242.pem (1708 bytes)
	I1123 08:57:12.745649  205557 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:57:12.758922  205557 ssh_runner.go:195] Run: openssl version
	I1123 08:57:12.766336  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:57:12.775543  205557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:57:12.779107  205557 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:57:12.779170  205557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:57:12.822771  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:57:12.831257  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4624.pem && ln -fs /usr/share/ca-certificates/4624.pem /etc/ssl/certs/4624.pem"
	I1123 08:57:12.839690  205557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4624.pem
	I1123 08:57:12.843488  205557 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:18 /usr/share/ca-certificates/4624.pem
	I1123 08:57:12.843554  205557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4624.pem
	I1123 08:57:12.886834  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4624.pem /etc/ssl/certs/51391683.0"
	I1123 08:57:12.895581  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46242.pem && ln -fs /usr/share/ca-certificates/46242.pem /etc/ssl/certs/46242.pem"
	I1123 08:57:12.903817  205557 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46242.pem
	I1123 08:57:12.911417  205557 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:18 /usr/share/ca-certificates/46242.pem
	I1123 08:57:12.911536  205557 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46242.pem
	I1123 08:57:12.969951  205557 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46242.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:57:12.979126  205557 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:57:12.983876  205557 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:57:12.983991  205557 kubeadm.go:401] StartCluster: {Name:old-k8s-version-132097 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-132097 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Custo
mQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:57:12.984075  205557 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:57:12.984170  205557 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:57:13.014456  205557 cri.go:89] found id: ""
	I1123 08:57:13.014564  205557 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:57:13.022498  205557 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:57:13.030730  205557 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:57:13.030849  205557 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:57:13.039816  205557 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:57:13.039844  205557 kubeadm.go:158] found existing configuration files:
	
	I1123 08:57:13.039921  205557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:57:13.048545  205557 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:57:13.048634  205557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:57:13.057763  205557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:57:13.067097  205557 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:57:13.067210  205557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:57:13.075916  205557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:57:13.084413  205557 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:57:13.084486  205557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:57:13.092014  205557 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:57:13.100047  205557 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:57:13.100161  205557 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:57:13.107516  205557 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:57:13.192784  205557 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 08:57:13.276045  205557 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:57:28.892126  205557 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1123 08:57:28.892182  205557 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:57:28.892271  205557 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:57:28.892326  205557 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:57:28.892360  205557 kubeadm.go:319] OS: Linux
	I1123 08:57:28.892405  205557 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:57:28.892453  205557 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:57:28.892514  205557 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:57:28.892563  205557 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:57:28.892610  205557 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:57:28.892658  205557 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:57:28.892703  205557 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:57:28.892750  205557 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:57:28.892804  205557 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:57:28.892880  205557 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:57:28.892975  205557 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:57:28.893066  205557 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1123 08:57:28.893128  205557 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:57:28.896176  205557 out.go:252]   - Generating certificates and keys ...
	I1123 08:57:28.896279  205557 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:57:28.896359  205557 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:57:28.896428  205557 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:57:28.896485  205557 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:57:28.896550  205557 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:57:28.896612  205557 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:57:28.896674  205557 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:57:28.896842  205557 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-132097] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:57:28.896918  205557 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:57:28.897064  205557 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-132097] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:57:28.897134  205557 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:57:28.897198  205557 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:57:28.897245  205557 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:57:28.897300  205557 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:57:28.897351  205557 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:57:28.897404  205557 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:57:28.897467  205557 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:57:28.897521  205557 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:57:28.897602  205557 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:57:28.897668  205557 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:57:28.900674  205557 out.go:252]   - Booting up control plane ...
	I1123 08:57:28.900844  205557 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:57:28.900966  205557 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:57:28.901088  205557 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:57:28.901238  205557 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:57:28.901375  205557 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:57:28.901448  205557 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:57:28.901650  205557 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1123 08:57:28.901770  205557 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.001968 seconds
	I1123 08:57:28.901932  205557 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:57:28.902114  205557 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:57:28.902215  205557 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:57:28.902423  205557 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-132097 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:57:28.902483  205557 kubeadm.go:319] [bootstrap-token] Using token: 7z9j2b.hjlhuwa1mzqkz0w6
	I1123 08:57:28.905268  205557 out.go:252]   - Configuring RBAC rules ...
	I1123 08:57:28.905378  205557 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:57:28.905539  205557 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:57:28.905717  205557 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:57:28.905895  205557 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:57:28.906047  205557 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:57:28.906178  205557 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:57:28.906334  205557 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:57:28.906402  205557 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:57:28.906484  205557 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:57:28.906511  205557 kubeadm.go:319] 
	I1123 08:57:28.906602  205557 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:57:28.906628  205557 kubeadm.go:319] 
	I1123 08:57:28.906742  205557 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:57:28.906770  205557 kubeadm.go:319] 
	I1123 08:57:28.906817  205557 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:57:28.906917  205557 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:57:28.907030  205557 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:57:28.907060  205557 kubeadm.go:319] 
	I1123 08:57:28.907136  205557 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:57:28.907166  205557 kubeadm.go:319] 
	I1123 08:57:28.907236  205557 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:57:28.907272  205557 kubeadm.go:319] 
	I1123 08:57:28.907367  205557 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:57:28.907472  205557 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:57:28.907569  205557 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:57:28.907593  205557 kubeadm.go:319] 
	I1123 08:57:28.907711  205557 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:57:28.907823  205557 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:57:28.907850  205557 kubeadm.go:319] 
	I1123 08:57:28.907971  205557 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 7z9j2b.hjlhuwa1mzqkz0w6 \
	I1123 08:57:28.908112  205557 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c \
	I1123 08:57:28.908155  205557 kubeadm.go:319] 	--control-plane 
	I1123 08:57:28.908180  205557 kubeadm.go:319] 
	I1123 08:57:28.908366  205557 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:57:28.908403  205557 kubeadm.go:319] 
	I1123 08:57:28.908523  205557 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 7z9j2b.hjlhuwa1mzqkz0w6 \
	I1123 08:57:28.908684  205557 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c 
	I1123 08:57:28.908716  205557 cni.go:84] Creating CNI manager for ""
	I1123 08:57:28.908740  205557 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:57:28.911862  205557 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:57:28.914765  205557 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:57:28.919219  205557 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1123 08:57:28.919293  205557 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:57:28.961110  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:57:30.161029  205557 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.199828121s)
	I1123 08:57:30.161135  205557 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:57:30.161234  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:30.161393  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-132097 minikube.k8s.io/updated_at=2025_11_23T08_57_30_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=old-k8s-version-132097 minikube.k8s.io/primary=true
	I1123 08:57:30.356796  205557 ops.go:34] apiserver oom_adj: -16
	I1123 08:57:30.356993  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:30.857895  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:31.357933  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:31.857379  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:32.357617  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:32.857095  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:33.357190  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:33.857128  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:34.357322  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:34.857082  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:35.357894  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:35.857960  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:36.357415  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:36.857626  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:37.357099  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:37.857319  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:38.357164  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:38.857866  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:39.357380  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:39.857673  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:40.357901  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:40.857934  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:41.357100  205557 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:57:41.514583  205557 kubeadm.go:1114] duration metric: took 11.353416765s to wait for elevateKubeSystemPrivileges
	I1123 08:57:41.514628  205557 kubeadm.go:403] duration metric: took 28.530663979s to StartCluster
	I1123 08:57:41.514645  205557 settings.go:142] acquiring lock: {Name:mkd0156f6f98ed352de83fb5c4c92474ddea9220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:41.514710  205557 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:57:41.515709  205557 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/kubeconfig: {Name:mk75cb4a9442799c344ac747e18ea4edd6e23c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:57:41.515940  205557 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:57:41.516056  205557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:57:41.516313  205557 config.go:182] Loaded profile config "old-k8s-version-132097": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1123 08:57:41.516354  205557 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 08:57:41.516417  205557 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-132097"
	I1123 08:57:41.516438  205557 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-132097"
	I1123 08:57:41.516465  205557 host.go:66] Checking if "old-k8s-version-132097" exists ...
	I1123 08:57:41.516971  205557 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-132097"
	I1123 08:57:41.516998  205557 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-132097"
	I1123 08:57:41.517323  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:41.517601  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:41.520451  205557 out.go:179] * Verifying Kubernetes components...
	I1123 08:57:41.524128  205557 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:57:41.560944  205557 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-132097"
	I1123 08:57:41.560983  205557 host.go:66] Checking if "old-k8s-version-132097" exists ...
	I1123 08:57:41.561527  205557 cli_runner.go:164] Run: docker container inspect old-k8s-version-132097 --format={{.State.Status}}
	I1123 08:57:41.575551  205557 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:57:41.579480  205557 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:57:41.579503  205557 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:57:41.579570  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:41.597748  205557 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:57:41.597769  205557 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:57:41.597833  205557 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-132097
	I1123 08:57:41.630287  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:41.631253  205557 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/old-k8s-version-132097/id_rsa Username:docker}
	I1123 08:57:41.872614  205557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:57:41.889603  205557 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 08:57:41.889795  205557 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:57:41.890281  205557 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:57:42.772391  205557 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-132097" to be "Ready" ...
	I1123 08:57:42.772688  205557 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 08:57:43.203763  205557 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.313415514s)
	I1123 08:57:43.206929  205557 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1123 08:57:43.209915  205557 addons.go:530] duration metric: took 1.69355008s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 08:57:43.277106  205557 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-132097" context rescaled to 1 replicas
	W1123 08:57:44.776064  205557 node_ready.go:57] node "old-k8s-version-132097" has "Ready":"False" status (will retry)
	W1123 08:57:47.276176  205557 node_ready.go:57] node "old-k8s-version-132097" has "Ready":"False" status (will retry)
	W1123 08:57:49.777083  205557 node_ready.go:57] node "old-k8s-version-132097" has "Ready":"False" status (will retry)
	W1123 08:57:52.276267  205557 node_ready.go:57] node "old-k8s-version-132097" has "Ready":"False" status (will retry)
	W1123 08:57:54.776135  205557 node_ready.go:57] node "old-k8s-version-132097" has "Ready":"False" status (will retry)
	I1123 08:57:55.275780  205557 node_ready.go:49] node "old-k8s-version-132097" is "Ready"
	I1123 08:57:55.275810  205557 node_ready.go:38] duration metric: took 12.503373096s for node "old-k8s-version-132097" to be "Ready" ...
	I1123 08:57:55.275825  205557 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:57:55.275887  205557 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:57:55.298771  205557 api_server.go:72] duration metric: took 13.782792148s to wait for apiserver process to appear ...
	I1123 08:57:55.298802  205557 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:57:55.298823  205557 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 08:57:55.308923  205557 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 08:57:55.310591  205557 api_server.go:141] control plane version: v1.28.0
	I1123 08:57:55.310647  205557 api_server.go:131] duration metric: took 11.836833ms to wait for apiserver health ...
	I1123 08:57:55.310657  205557 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:57:55.316780  205557 system_pods.go:59] 8 kube-system pods found
	I1123 08:57:55.316822  205557 system_pods.go:61] "coredns-5dd5756b68-8lvr2" [785f330b-add9-400f-a67a-7b6363a1c87e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:55.316829  205557 system_pods.go:61] "etcd-old-k8s-version-132097" [7e6e3c5c-fdfb-41dd-b5bd-10528eae39d6] Running
	I1123 08:57:55.316834  205557 system_pods.go:61] "kindnet-4qsxx" [460cc36d-ef1c-42af-a119-d8b5e5a667f3] Running
	I1123 08:57:55.316838  205557 system_pods.go:61] "kube-apiserver-old-k8s-version-132097" [a80a64e4-5649-4632-bf5a-45e6a21fd0fd] Running
	I1123 08:57:55.316842  205557 system_pods.go:61] "kube-controller-manager-old-k8s-version-132097" [446ffa14-8b7d-4786-8983-8f2e5ed2d1f1] Running
	I1123 08:57:55.316846  205557 system_pods.go:61] "kube-proxy-6lfm7" [4a3801eb-3ef6-464e-85cf-292e08e28bb7] Running
	I1123 08:57:55.316849  205557 system_pods.go:61] "kube-scheduler-old-k8s-version-132097" [a31c8365-52aa-4d0e-ae3d-d553a20ff782] Running
	I1123 08:57:55.316854  205557 system_pods.go:61] "storage-provisioner" [58fb64b1-807f-49e7-9c48-681619d898c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:55.316859  205557 system_pods.go:74] duration metric: took 6.197648ms to wait for pod list to return data ...
	I1123 08:57:55.316867  205557 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:57:55.319559  205557 default_sa.go:45] found service account: "default"
	I1123 08:57:55.319588  205557 default_sa.go:55] duration metric: took 2.714097ms for default service account to be created ...
	I1123 08:57:55.319598  205557 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:57:55.324396  205557 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:55.324478  205557 system_pods.go:89] "coredns-5dd5756b68-8lvr2" [785f330b-add9-400f-a67a-7b6363a1c87e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:55.324492  205557 system_pods.go:89] "etcd-old-k8s-version-132097" [7e6e3c5c-fdfb-41dd-b5bd-10528eae39d6] Running
	I1123 08:57:55.324499  205557 system_pods.go:89] "kindnet-4qsxx" [460cc36d-ef1c-42af-a119-d8b5e5a667f3] Running
	I1123 08:57:55.324504  205557 system_pods.go:89] "kube-apiserver-old-k8s-version-132097" [a80a64e4-5649-4632-bf5a-45e6a21fd0fd] Running
	I1123 08:57:55.324510  205557 system_pods.go:89] "kube-controller-manager-old-k8s-version-132097" [446ffa14-8b7d-4786-8983-8f2e5ed2d1f1] Running
	I1123 08:57:55.324514  205557 system_pods.go:89] "kube-proxy-6lfm7" [4a3801eb-3ef6-464e-85cf-292e08e28bb7] Running
	I1123 08:57:55.324517  205557 system_pods.go:89] "kube-scheduler-old-k8s-version-132097" [a31c8365-52aa-4d0e-ae3d-d553a20ff782] Running
	I1123 08:57:55.324523  205557 system_pods.go:89] "storage-provisioner" [58fb64b1-807f-49e7-9c48-681619d898c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:55.324553  205557 retry.go:31] will retry after 265.819586ms: missing components: kube-dns
	I1123 08:57:55.596189  205557 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:55.596222  205557 system_pods.go:89] "coredns-5dd5756b68-8lvr2" [785f330b-add9-400f-a67a-7b6363a1c87e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:55.596230  205557 system_pods.go:89] "etcd-old-k8s-version-132097" [7e6e3c5c-fdfb-41dd-b5bd-10528eae39d6] Running
	I1123 08:57:55.596237  205557 system_pods.go:89] "kindnet-4qsxx" [460cc36d-ef1c-42af-a119-d8b5e5a667f3] Running
	I1123 08:57:55.596241  205557 system_pods.go:89] "kube-apiserver-old-k8s-version-132097" [a80a64e4-5649-4632-bf5a-45e6a21fd0fd] Running
	I1123 08:57:55.596246  205557 system_pods.go:89] "kube-controller-manager-old-k8s-version-132097" [446ffa14-8b7d-4786-8983-8f2e5ed2d1f1] Running
	I1123 08:57:55.596250  205557 system_pods.go:89] "kube-proxy-6lfm7" [4a3801eb-3ef6-464e-85cf-292e08e28bb7] Running
	I1123 08:57:55.596254  205557 system_pods.go:89] "kube-scheduler-old-k8s-version-132097" [a31c8365-52aa-4d0e-ae3d-d553a20ff782] Running
	I1123 08:57:55.596263  205557 system_pods.go:89] "storage-provisioner" [58fb64b1-807f-49e7-9c48-681619d898c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:55.596281  205557 retry.go:31] will retry after 289.288774ms: missing components: kube-dns
	I1123 08:57:55.890405  205557 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:55.890438  205557 system_pods.go:89] "coredns-5dd5756b68-8lvr2" [785f330b-add9-400f-a67a-7b6363a1c87e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:57:55.890446  205557 system_pods.go:89] "etcd-old-k8s-version-132097" [7e6e3c5c-fdfb-41dd-b5bd-10528eae39d6] Running
	I1123 08:57:55.890452  205557 system_pods.go:89] "kindnet-4qsxx" [460cc36d-ef1c-42af-a119-d8b5e5a667f3] Running
	I1123 08:57:55.890457  205557 system_pods.go:89] "kube-apiserver-old-k8s-version-132097" [a80a64e4-5649-4632-bf5a-45e6a21fd0fd] Running
	I1123 08:57:55.890462  205557 system_pods.go:89] "kube-controller-manager-old-k8s-version-132097" [446ffa14-8b7d-4786-8983-8f2e5ed2d1f1] Running
	I1123 08:57:55.890466  205557 system_pods.go:89] "kube-proxy-6lfm7" [4a3801eb-3ef6-464e-85cf-292e08e28bb7] Running
	I1123 08:57:55.890470  205557 system_pods.go:89] "kube-scheduler-old-k8s-version-132097" [a31c8365-52aa-4d0e-ae3d-d553a20ff782] Running
	I1123 08:57:55.890476  205557 system_pods.go:89] "storage-provisioner" [58fb64b1-807f-49e7-9c48-681619d898c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:57:55.890495  205557 retry.go:31] will retry after 460.275032ms: missing components: kube-dns
	I1123 08:57:56.354648  205557 system_pods.go:86] 8 kube-system pods found
	I1123 08:57:56.354681  205557 system_pods.go:89] "coredns-5dd5756b68-8lvr2" [785f330b-add9-400f-a67a-7b6363a1c87e] Running
	I1123 08:57:56.354689  205557 system_pods.go:89] "etcd-old-k8s-version-132097" [7e6e3c5c-fdfb-41dd-b5bd-10528eae39d6] Running
	I1123 08:57:56.354693  205557 system_pods.go:89] "kindnet-4qsxx" [460cc36d-ef1c-42af-a119-d8b5e5a667f3] Running
	I1123 08:57:56.354698  205557 system_pods.go:89] "kube-apiserver-old-k8s-version-132097" [a80a64e4-5649-4632-bf5a-45e6a21fd0fd] Running
	I1123 08:57:56.354703  205557 system_pods.go:89] "kube-controller-manager-old-k8s-version-132097" [446ffa14-8b7d-4786-8983-8f2e5ed2d1f1] Running
	I1123 08:57:56.354707  205557 system_pods.go:89] "kube-proxy-6lfm7" [4a3801eb-3ef6-464e-85cf-292e08e28bb7] Running
	I1123 08:57:56.354711  205557 system_pods.go:89] "kube-scheduler-old-k8s-version-132097" [a31c8365-52aa-4d0e-ae3d-d553a20ff782] Running
	I1123 08:57:56.354715  205557 system_pods.go:89] "storage-provisioner" [58fb64b1-807f-49e7-9c48-681619d898c6] Running
	I1123 08:57:56.354723  205557 system_pods.go:126] duration metric: took 1.035119204s to wait for k8s-apps to be running ...
	I1123 08:57:56.354734  205557 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:57:56.354790  205557 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:57:56.367765  205557 system_svc.go:56] duration metric: took 13.022413ms WaitForService to wait for kubelet
	I1123 08:57:56.367791  205557 kubeadm.go:587] duration metric: took 14.851819067s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:57:56.367810  205557 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:57:56.370552  205557 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 08:57:56.370585  205557 node_conditions.go:123] node cpu capacity is 2
	I1123 08:57:56.370598  205557 node_conditions.go:105] duration metric: took 2.78248ms to run NodePressure ...
	I1123 08:57:56.370610  205557 start.go:242] waiting for startup goroutines ...
	I1123 08:57:56.370618  205557 start.go:247] waiting for cluster config update ...
	I1123 08:57:56.370629  205557 start.go:256] writing updated cluster config ...
	I1123 08:57:56.370907  205557 ssh_runner.go:195] Run: rm -f paused
	I1123 08:57:56.374397  205557 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:57:56.378555  205557 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-8lvr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.383880  205557 pod_ready.go:94] pod "coredns-5dd5756b68-8lvr2" is "Ready"
	I1123 08:57:56.383903  205557 pod_ready.go:86] duration metric: took 5.319427ms for pod "coredns-5dd5756b68-8lvr2" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.387774  205557 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.393131  205557 pod_ready.go:94] pod "etcd-old-k8s-version-132097" is "Ready"
	I1123 08:57:56.393163  205557 pod_ready.go:86] duration metric: took 5.363924ms for pod "etcd-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.396395  205557 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.401275  205557 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-132097" is "Ready"
	I1123 08:57:56.401341  205557 pod_ready.go:86] duration metric: took 4.918249ms for pod "kube-apiserver-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.404345  205557 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.778738  205557 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-132097" is "Ready"
	I1123 08:57:56.778766  205557 pod_ready.go:86] duration metric: took 374.394138ms for pod "kube-controller-manager-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:56.980067  205557 pod_ready.go:83] waiting for pod "kube-proxy-6lfm7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:57.378396  205557 pod_ready.go:94] pod "kube-proxy-6lfm7" is "Ready"
	I1123 08:57:57.378426  205557 pod_ready.go:86] duration metric: took 398.283527ms for pod "kube-proxy-6lfm7" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:57.579469  205557 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:57.978899  205557 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-132097" is "Ready"
	I1123 08:57:57.978928  205557 pod_ready.go:86] duration metric: took 399.387809ms for pod "kube-scheduler-old-k8s-version-132097" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:57:57.978942  205557 pod_ready.go:40] duration metric: took 1.604514744s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:57:58.040107  205557 start.go:625] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1123 08:57:58.043481  205557 out.go:203] 
	W1123 08:57:58.046479  205557 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1123 08:57:58.049437  205557 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1123 08:57:58.053044  205557 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-132097" cluster and "default" namespace by default
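	The node_ready waits in the log above repeatedly query the API server until the node's Ready condition turns True. Below is a minimal illustrative sketch of that same check using client-go; it is not minikube's own code path, and it assumes the kubeconfig written by this run sits at the default path with its current context pointing at the "old-k8s-version-132097" cluster (node name taken from the log).
	
	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Load the kubeconfig at the default location (assumption: current context targets this cluster).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll the node until its Ready condition reports True, mirroring the retries seen in the log.
		for {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-132097", metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
	}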
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	443f41771e05c       1611cd07b61d5       9 seconds ago       Running             busybox                   0                   85abf670da81c       busybox                                          default
	d791bd8ea3407       97e04611ad434       15 seconds ago      Running             coredns                   0                   dcc7b0a9a30b4       coredns-5dd5756b68-8lvr2                         kube-system
	028b80af9b9ff       ba04bb24b9575       15 seconds ago      Running             storage-provisioner       0                   3166da96aba38       storage-provisioner                              kube-system
	262637053c8ee       b1a8c6f707935       26 seconds ago      Running             kindnet-cni               0                   4589180347fb2       kindnet-4qsxx                                    kube-system
	5ce0118136c4e       940f54a5bcae9       28 seconds ago      Running             kube-proxy                0                   06f0effc91df6       kube-proxy-6lfm7                                 kube-system
	e18db1e58c264       9cdd6470f48c8       49 seconds ago      Running             etcd                      0                   d406878ab6eec       etcd-old-k8s-version-132097                      kube-system
	a39215833acd6       46cc66ccc7c19       49 seconds ago      Running             kube-controller-manager   0                   faa81e5d64709       kube-controller-manager-old-k8s-version-132097   kube-system
	91ed0354df278       00543d2fe5d71       49 seconds ago      Running             kube-apiserver            0                   7e599901c226d       kube-apiserver-old-k8s-version-132097            kube-system
	2c9790d4aacf8       762dce4090c5f       49 seconds ago      Running             kube-scheduler            0                   062cc54bf41d6       kube-scheduler-old-k8s-version-132097            kube-system
	
	
	==> containerd <==
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.335218943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-8lvr2,Uid:785f330b-add9-400f-a67a-7b6363a1c87e,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcc7b0a9a30b4707a452574e5a16a4dac60a5d6943c039246545f5965671213f\""
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.344990342Z" level=info msg="CreateContainer within sandbox \"dcc7b0a9a30b4707a452574e5a16a4dac60a5d6943c039246545f5965671213f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.368459130Z" level=info msg="Container d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.382472405Z" level=info msg="CreateContainer within sandbox \"dcc7b0a9a30b4707a452574e5a16a4dac60a5d6943c039246545f5965671213f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f\""
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.389456614Z" level=info msg="StartContainer for \"d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f\""
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.390518148Z" level=info msg="connecting to shim d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f" address="unix:///run/containerd/s/cc0e752ae5af84c3942e02fc76087b3bc54562e675958bf09a37c137ca348912" protocol=ttrpc version=3
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.411234282Z" level=info msg="StartContainer for \"028b80af9b9ff071ccba6aaf458f6ef6e09b02b89f1b1e2d2504daa0b06b4ffd\" returns successfully"
	Nov 23 08:57:55 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:55.484449976Z" level=info msg="StartContainer for \"d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f\" returns successfully"
	Nov 23 08:57:58 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:58.571119104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ee3865ff-dc6d-4911-94c7-09b6024edb7c,Namespace:default,Attempt:0,}"
	Nov 23 08:57:58 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:58.630842341Z" level=info msg="connecting to shim 85abf670da81c7a87741da49cde2f83dbe22def5dc9b21d63e3048ccf846a4e2" address="unix:///run/containerd/s/1893f1c067c0462f977516acb9a1a26aa54cab5ca38da35332e88ebfed76c954" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 08:57:58 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:58.691245339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:ee3865ff-dc6d-4911-94c7-09b6024edb7c,Namespace:default,Attempt:0,} returns sandbox id \"85abf670da81c7a87741da49cde2f83dbe22def5dc9b21d63e3048ccf846a4e2\""
	Nov 23 08:57:58 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:57:58.695618499Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.904262576Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.906208648Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.908680725Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.911903120Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.912621913Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.216959869s"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.912733455Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.914630715Z" level=info msg="CreateContainer within sandbox \"85abf670da81c7a87741da49cde2f83dbe22def5dc9b21d63e3048ccf846a4e2\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.932438379Z" level=info msg="Container 443f41771e05c673621e113d8199bc58b51f9f7dc6f160793b9e330aa0663fe1: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.943426340Z" level=info msg="CreateContainer within sandbox \"85abf670da81c7a87741da49cde2f83dbe22def5dc9b21d63e3048ccf846a4e2\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"443f41771e05c673621e113d8199bc58b51f9f7dc6f160793b9e330aa0663fe1\""
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.944475517Z" level=info msg="StartContainer for \"443f41771e05c673621e113d8199bc58b51f9f7dc6f160793b9e330aa0663fe1\""
	Nov 23 08:58:00 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:00.945415679Z" level=info msg="connecting to shim 443f41771e05c673621e113d8199bc58b51f9f7dc6f160793b9e330aa0663fe1" address="unix:///run/containerd/s/1893f1c067c0462f977516acb9a1a26aa54cab5ca38da35332e88ebfed76c954" protocol=ttrpc version=3
	Nov 23 08:58:01 old-k8s-version-132097 containerd[756]: time="2025-11-23T08:58:01.008332789Z" level=info msg="StartContainer for \"443f41771e05c673621e113d8199bc58b51f9f7dc6f160793b9e330aa0663fe1\" returns successfully"
	Nov 23 08:58:07 old-k8s-version-132097 containerd[756]: E1123 08:58:07.395806     756 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [d791bd8ea34075f8354d26700984b29b8aff6006a62c463cc78a9923c251700f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36924 - 1492 "HINFO IN 1719261008560871152.4772551208800573690. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.031590225s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-132097
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-132097
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=old-k8s-version-132097
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_57_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:57:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-132097
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:58:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:57:59 +0000   Sun, 23 Nov 2025 08:57:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:57:59 +0000   Sun, 23 Nov 2025 08:57:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:57:59 +0000   Sun, 23 Nov 2025 08:57:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:57:59 +0000   Sun, 23 Nov 2025 08:57:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-132097
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                aede0b78-2473-402b-93c1-c2cfe1396e63
	  Boot ID:                    86d8501c-1df5-4d7e-90cb-d9ad951202c5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-8lvr2                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-old-k8s-version-132097                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-4qsxx                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-132097             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-132097    200m (10%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-6lfm7                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-132097             100m (5%)     0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28s   kube-proxy       
	  Normal  Starting                 43s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s   kubelet          Node old-k8s-version-132097 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s   kubelet          Node old-k8s-version-132097 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s   kubelet          Node old-k8s-version-132097 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  42s   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s   node-controller  Node old-k8s-version-132097 event: Registered Node old-k8s-version-132097 in Controller
	  Normal  NodeReady                17s   kubelet          Node old-k8s-version-132097 status is now: NodeReady
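	As a cross-check of the Allocated resources table above: the per-pod CPU requests sum to 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the 2000m (2 CPU) allocatable is 42.5%, which the table truncates to 42%. The memory requests likewise total 70Mi + 100Mi + 50Mi = 220Mi.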
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014670] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505841] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033008] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.738583] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.057424] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:10] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:26] hrtimer: interrupt took 58442338 ns
	
	
	==> etcd [e18db1e58c2640e7866f60c485d7eab1e2a24b23fac62c295a037323425437fe] <==
	{"level":"info","ts":"2025-11-23T08:57:21.955003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-23T08:57:21.955252Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-23T08:57:21.965954Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T08:57:21.966169Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-23T08:57:21.963226Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T08:57:21.967095Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T08:57:21.967222Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T08:57:22.382442Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-23T08:57:22.382712Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-23T08:57:22.382851Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-23T08:57:22.382965Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-23T08:57:22.383049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T08:57:22.383141Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-23T08:57:22.383243Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-23T08:57:22.385012Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-132097 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T08:57:22.385184Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:57:22.385575Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T08:57:22.387543Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-11-23T08:57:22.391753Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:57:22.392593Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T08:57:22.395676Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T08:57:22.395279Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T08:57:22.399318Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:57:22.400444Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T08:57:22.403395Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 08:58:11 up  1:40,  0 user,  load average: 3.82, 3.96, 2.97
	Linux old-k8s-version-132097 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [262637053c8ee625a3046202afe327f1cbb2db99ae7880d1f98b14410b912320] <==
	I1123 08:57:44.423888       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:57:44.424125       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 08:57:44.424308       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:57:44.424328       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:57:44.424342       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:57:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:57:44.722383       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:57:44.722540       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:57:44.722606       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:57:44.723642       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:57:44.923167       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:57:44.923194       1 metrics.go:72] Registering metrics
	I1123 08:57:44.923246       1 controller.go:711] "Syncing nftables rules"
	I1123 08:57:54.720199       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:57:54.720264       1 main.go:301] handling current node
	I1123 08:58:04.720328       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 08:58:04.720420       1 main.go:301] handling current node
	
	
	==> kube-apiserver [91ed0354df2789371f586deb2efd76f6b353c168f5d0057d043d9428ec15a073] <==
	I1123 08:57:25.646741       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1123 08:57:25.648680       1 shared_informer.go:318] Caches are synced for configmaps
	I1123 08:57:25.650094       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 08:57:25.651731       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 08:57:25.651826       1 aggregator.go:166] initial CRD sync complete...
	I1123 08:57:25.651843       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 08:57:25.651850       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 08:57:25.651856       1 cache.go:39] Caches are synced for autoregister controller
	I1123 08:57:25.683457       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 08:57:25.691275       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:57:26.455551       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 08:57:26.462134       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 08:57:26.462425       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 08:57:27.201994       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:57:27.249248       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:57:27.384920       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 08:57:27.399717       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 08:57:27.401749       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 08:57:27.408887       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:57:27.638262       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 08:57:28.790710       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 08:57:28.805005       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 08:57:28.821606       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 08:57:41.246676       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 08:57:41.400035       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [a39215833acd606b990fcfb98f27976ee37011ce0a9e89cbfe4edd8da0204682] <==
	I1123 08:57:40.644795       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 08:57:40.648914       1 shared_informer.go:318] Caches are synced for daemon sets
	I1123 08:57:40.669568       1 shared_informer.go:318] Caches are synced for attach detach
	I1123 08:57:41.034219       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:57:41.069125       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 08:57:41.069338       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 08:57:41.253094       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 08:57:41.417459       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-4qsxx"
	I1123 08:57:41.428699       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-6lfm7"
	I1123 08:57:41.506669       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-7zb7t"
	I1123 08:57:41.549168       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-8lvr2"
	I1123 08:57:41.572108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="318.669074ms"
	I1123 08:57:41.607804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="35.196229ms"
	I1123 08:57:41.619720       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="137.134µs"
	I1123 08:57:42.827853       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 08:57:42.842789       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-7zb7t"
	I1123 08:57:42.860071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="33.403347ms"
	I1123 08:57:42.872477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.358791ms"
	I1123 08:57:42.873476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="48.181µs"
	I1123 08:57:54.831574       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.628µs"
	I1123 08:57:54.871654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="86.524µs"
	I1123 08:57:55.592477       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1123 08:57:56.191547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="75.086µs"
	I1123 08:57:56.232618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.087889ms"
	I1123 08:57:56.232830       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.526µs"
	
	
	==> kube-proxy [5ce0118136c4ec2de35306557d9fc06af6aaa216d5785486748a430eb80abf06] <==
	I1123 08:57:42.278043       1 server_others.go:69] "Using iptables proxy"
	I1123 08:57:42.295124       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1123 08:57:42.346235       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:57:42.348370       1 server_others.go:152] "Using iptables Proxier"
	I1123 08:57:42.348415       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 08:57:42.348424       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 08:57:42.348449       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 08:57:42.348648       1 server.go:846] "Version info" version="v1.28.0"
	I1123 08:57:42.348666       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:57:42.349577       1 config.go:188] "Starting service config controller"
	I1123 08:57:42.349617       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 08:57:42.349642       1 config.go:97] "Starting endpoint slice config controller"
	I1123 08:57:42.349650       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 08:57:42.354459       1 config.go:315] "Starting node config controller"
	I1123 08:57:42.354505       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 08:57:42.449923       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 08:57:42.449978       1 shared_informer.go:318] Caches are synced for service config
	I1123 08:57:42.455480       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [2c9790d4aacf8186f2b6020db5c0fcf23b94a4dd352884aa58bf5601213b7764] <==
	W1123 08:57:26.643688       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 08:57:26.643732       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 08:57:26.643702       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1123 08:57:26.643870       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1123 08:57:26.647635       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 08:57:26.647669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1123 08:57:26.647856       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 08:57:26.647879       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 08:57:26.648580       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1123 08:57:26.648648       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1123 08:57:26.652463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1123 08:57:26.652498       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1123 08:57:26.653338       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1123 08:57:26.653365       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 08:57:26.653581       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 08:57:26.653634       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 08:57:26.653731       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 08:57:26.653748       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 08:57:26.653800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1123 08:57:26.653814       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1123 08:57:26.653902       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 08:57:26.653916       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 08:57:26.653948       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 08:57:26.653962       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I1123 08:57:27.838013       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 08:57:40 old-k8s-version-132097 kubelet[1531]: I1123 08:57:40.615021    1531 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.438219    1531 topology_manager.go:215] "Topology Admit Handler" podUID="460cc36d-ef1c-42af-a119-d8b5e5a667f3" podNamespace="kube-system" podName="kindnet-4qsxx"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.458829    1531 topology_manager.go:215] "Topology Admit Handler" podUID="4a3801eb-3ef6-464e-85cf-292e08e28bb7" podNamespace="kube-system" podName="kube-proxy-6lfm7"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.576882    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a3801eb-3ef6-464e-85cf-292e08e28bb7-kube-proxy\") pod \"kube-proxy-6lfm7\" (UID: \"4a3801eb-3ef6-464e-85cf-292e08e28bb7\") " pod="kube-system/kube-proxy-6lfm7"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.577180    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a3801eb-3ef6-464e-85cf-292e08e28bb7-xtables-lock\") pod \"kube-proxy-6lfm7\" (UID: \"4a3801eb-3ef6-464e-85cf-292e08e28bb7\") " pod="kube-system/kube-proxy-6lfm7"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.577426    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/460cc36d-ef1c-42af-a119-d8b5e5a667f3-xtables-lock\") pod \"kindnet-4qsxx\" (UID: \"460cc36d-ef1c-42af-a119-d8b5e5a667f3\") " pod="kube-system/kindnet-4qsxx"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.577602    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/460cc36d-ef1c-42af-a119-d8b5e5a667f3-lib-modules\") pod \"kindnet-4qsxx\" (UID: \"460cc36d-ef1c-42af-a119-d8b5e5a667f3\") " pod="kube-system/kindnet-4qsxx"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.577765    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a3801eb-3ef6-464e-85cf-292e08e28bb7-lib-modules\") pod \"kube-proxy-6lfm7\" (UID: \"4a3801eb-3ef6-464e-85cf-292e08e28bb7\") " pod="kube-system/kube-proxy-6lfm7"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.577940    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/460cc36d-ef1c-42af-a119-d8b5e5a667f3-cni-cfg\") pod \"kindnet-4qsxx\" (UID: \"460cc36d-ef1c-42af-a119-d8b5e5a667f3\") " pod="kube-system/kindnet-4qsxx"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.578101    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjpf9\" (UniqueName: \"kubernetes.io/projected/460cc36d-ef1c-42af-a119-d8b5e5a667f3-kube-api-access-rjpf9\") pod \"kindnet-4qsxx\" (UID: \"460cc36d-ef1c-42af-a119-d8b5e5a667f3\") " pod="kube-system/kindnet-4qsxx"
	Nov 23 08:57:41 old-k8s-version-132097 kubelet[1531]: I1123 08:57:41.578330    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jnhp\" (UniqueName: \"kubernetes.io/projected/4a3801eb-3ef6-464e-85cf-292e08e28bb7-kube-api-access-7jnhp\") pod \"kube-proxy-6lfm7\" (UID: \"4a3801eb-3ef6-464e-85cf-292e08e28bb7\") " pod="kube-system/kube-proxy-6lfm7"
	Nov 23 08:57:43 old-k8s-version-132097 kubelet[1531]: I1123 08:57:43.151928    1531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6lfm7" podStartSLOduration=2.15188228 podCreationTimestamp="2025-11-23 08:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:57:43.151723927 +0000 UTC m=+14.389401589" watchObservedRunningTime="2025-11-23 08:57:43.15188228 +0000 UTC m=+14.389559925"
	Nov 23 08:57:45 old-k8s-version-132097 kubelet[1531]: I1123 08:57:45.193171    1531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-4qsxx" podStartSLOduration=2.184281555 podCreationTimestamp="2025-11-23 08:57:41 +0000 UTC" firstStartedPulling="2025-11-23 08:57:42.069696835 +0000 UTC m=+13.307374463" lastFinishedPulling="2025-11-23 08:57:44.078534141 +0000 UTC m=+15.316211770" observedRunningTime="2025-11-23 08:57:45.190347369 +0000 UTC m=+16.428025023" watchObservedRunningTime="2025-11-23 08:57:45.193118862 +0000 UTC m=+16.430796499"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.786544    1531 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.830913    1531 topology_manager.go:215] "Topology Admit Handler" podUID="785f330b-add9-400f-a67a-7b6363a1c87e" podNamespace="kube-system" podName="coredns-5dd5756b68-8lvr2"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.834405    1531 topology_manager.go:215] "Topology Admit Handler" podUID="58fb64b1-807f-49e7-9c48-681619d898c6" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.874977    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcns6\" (UniqueName: \"kubernetes.io/projected/58fb64b1-807f-49e7-9c48-681619d898c6-kube-api-access-jcns6\") pod \"storage-provisioner\" (UID: \"58fb64b1-807f-49e7-9c48-681619d898c6\") " pod="kube-system/storage-provisioner"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.875052    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/785f330b-add9-400f-a67a-7b6363a1c87e-config-volume\") pod \"coredns-5dd5756b68-8lvr2\" (UID: \"785f330b-add9-400f-a67a-7b6363a1c87e\") " pod="kube-system/coredns-5dd5756b68-8lvr2"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.875094    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/58fb64b1-807f-49e7-9c48-681619d898c6-tmp\") pod \"storage-provisioner\" (UID: \"58fb64b1-807f-49e7-9c48-681619d898c6\") " pod="kube-system/storage-provisioner"
	Nov 23 08:57:54 old-k8s-version-132097 kubelet[1531]: I1123 08:57:54.875132    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfqcr\" (UniqueName: \"kubernetes.io/projected/785f330b-add9-400f-a67a-7b6363a1c87e-kube-api-access-dfqcr\") pod \"coredns-5dd5756b68-8lvr2\" (UID: \"785f330b-add9-400f-a67a-7b6363a1c87e\") " pod="kube-system/coredns-5dd5756b68-8lvr2"
	Nov 23 08:57:56 old-k8s-version-132097 kubelet[1531]: I1123 08:57:56.213611    1531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-8lvr2" podStartSLOduration=15.213558978 podCreationTimestamp="2025-11-23 08:57:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:57:56.192056283 +0000 UTC m=+27.429733920" watchObservedRunningTime="2025-11-23 08:57:56.213558978 +0000 UTC m=+27.451236607"
	Nov 23 08:57:58 old-k8s-version-132097 kubelet[1531]: I1123 08:57:58.264381    1531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.264335314 podCreationTimestamp="2025-11-23 08:57:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 08:57:56.246040076 +0000 UTC m=+27.483717705" watchObservedRunningTime="2025-11-23 08:57:58.264335314 +0000 UTC m=+29.502012943"
	Nov 23 08:57:58 old-k8s-version-132097 kubelet[1531]: I1123 08:57:58.264678    1531 topology_manager.go:215] "Topology Admit Handler" podUID="ee3865ff-dc6d-4911-94c7-09b6024edb7c" podNamespace="default" podName="busybox"
	Nov 23 08:57:58 old-k8s-version-132097 kubelet[1531]: I1123 08:57:58.299576    1531 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9r4f\" (UniqueName: \"kubernetes.io/projected/ee3865ff-dc6d-4911-94c7-09b6024edb7c-kube-api-access-b9r4f\") pod \"busybox\" (UID: \"ee3865ff-dc6d-4911-94c7-09b6024edb7c\") " pod="default/busybox"
	Nov 23 08:58:01 old-k8s-version-132097 kubelet[1531]: I1123 08:58:01.221466    1531 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.000997351 podCreationTimestamp="2025-11-23 08:57:58 +0000 UTC" firstStartedPulling="2025-11-23 08:57:58.692634622 +0000 UTC m=+29.930312259" lastFinishedPulling="2025-11-23 08:58:00.913057446 +0000 UTC m=+32.150735083" observedRunningTime="2025-11-23 08:58:01.221072332 +0000 UTC m=+32.458749960" watchObservedRunningTime="2025-11-23 08:58:01.221420175 +0000 UTC m=+32.459097813"
	
	
	==> storage-provisioner [028b80af9b9ff071ccba6aaf458f6ef6e09b02b89f1b1e2d2504daa0b06b4ffd] <==
	I1123 08:57:55.420215       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 08:57:55.444955       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 08:57:55.445179       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 08:57:55.459038       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 08:57:55.462063       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-132097_336630f9-1c2a-4210-93a8-d3736b9ac669!
	I1123 08:57:55.464703       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3a8c1278-4d51-4a69-a242-d47431dca2ba", APIVersion:"v1", ResourceVersion:"410", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-132097_336630f9-1c2a-4210-93a8-d3736b9ac669 became leader
	I1123 08:57:55.563109       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-132097_336630f9-1c2a-4210-93a8-d3736b9ac669!
	

                                                
                                                
-- /stdout --
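Note on the storage-provisioner log above: it shows the provisioner acquiring the leader lease on the kube-system/k8s.io-minikube-hostpath Endpoints object before starting its controller. As a quick sketch (assuming the profile is still running; the holder identity lives in the standard client-go leader-election annotation on that object), the current leader could be confirmed with:

	kubectl --context old-k8s-version-132097 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# look for the control-plane.alpha.kubernetes.io/leader annotation; its holderIdentity
	# should match the provisioner identity logged above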
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-132097 -n old-k8s-version-132097
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-132097 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (13.90s)
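All four DeployApp failures in this run trip the same assertion: the busybox pod reports a soft open-file limit of 1024 where the test expects 1048576, while scheduling and pod startup are otherwise healthy. A minimal sketch of reproducing the check by hand against this profile, using the same command the harness runs, plus a look at the limit the containerd service is started with on the node (assuming that is where the pod's limit is inherited from):

	# Re-run the failing check exactly as the test does.
	kubectl --context old-k8s-version-132097 exec busybox -- /bin/sh -c "ulimit -n"    # soft limit seen inside the pod
	kubectl --context old-k8s-version-132097 exec busybox -- /bin/sh -c "ulimit -Hn"   # hard limit, for comparison
	# Inspect the nofile limit of the containerd service inside the minikube node.
	minikube ssh -p old-k8s-version-132097 -- systemctl show containerd --property=LimitNOFILE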

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-118762 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5c2314ab-27c6-4441-889f-af501dd53560] Pending
helpers_test.go:352: "busybox" [5c2314ab-27c6-4441-889f-af501dd53560] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1123 09:01:02.172198    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [5c2314ab-27c6-4441-889f-af501dd53560] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004128201s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-118762 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-118762
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-118762:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c",
	        "Created": "2025-11-23T08:59:39.122301538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 215560,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:59:39.190221667Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c/hosts",
	        "LogPath": "/var/lib/docker/containers/9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c/9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c-json.log",
	        "Name": "/default-k8s-diff-port-118762",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-118762:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-118762",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c",
	                "LowerDir": "/var/lib/docker/overlay2/f999f3409882cb4ddc869e7d40ae0cbb7d25319a3657e618b3d903ead519ef2d-init/diff:/var/lib/docker/overlay2/e1de88c117c0c773e1fa636243190fd97eadaa5a8e1ee08fd53827cbac767d35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f999f3409882cb4ddc869e7d40ae0cbb7d25319a3657e618b3d903ead519ef2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f999f3409882cb4ddc869e7d40ae0cbb7d25319a3657e618b3d903ead519ef2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f999f3409882cb4ddc869e7d40ae0cbb7d25319a3657e618b3d903ead519ef2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-118762",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-118762/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-118762",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-118762",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-118762",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "384d6bbad0e8ca5751052a1b67261e1cd19d59c71672f2d31cbbeca0bdf614f9",
	            "SandboxKey": "/var/run/docker/netns/384d6bbad0e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-118762": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:91:76:b6:ac:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2b6d77ec61e96b127fbc34ebc64c03e7e497d95e117654f3d1a0ea3bd4bc6193",
	                    "EndpointID": "194f8cb7e543614697d6074a54a3b0fd34fcc4ff0587d794942dd4133f848483",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-118762",
	                        "9b8dfb0e1800"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
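One detail worth noting in the inspect output above: HostConfig.Ulimits is empty ("Ulimits": []), so no explicit nofile override is applied to the kic container and processes inside inherit whatever defaults the daemon and init pass down, which is consistent with the 1024 returned by the failing 'ulimit -n' check. For contrast, a hedged illustration of an explicit per-container override at the Docker level (illustrative only, not a statement of how minikube configures this container):

	# Start a throwaway container with an explicit open-file limit and read it back.
	docker run --rm --ulimit nofile=1048576:1048576 busybox sh -c "ulimit -n"
	# prints 1048576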
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-118762 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-118762 logs -n 25: (1.241610441s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p cilium-694698 sudo crio config                                                                                                                                                                                                                   │ cilium-694698                │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ delete  │ -p cilium-694698                                                                                                                                                                                                                                    │ cilium-694698                │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p force-systemd-env-023309 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p kubernetes-upgrade-291582                                                                                                                                                                                                                        │ kubernetes-upgrade-291582    │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p cert-expiration-918102 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ force-systemd-env-023309 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p force-systemd-env-023309                                                                                                                                                                                                                         │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-options-886452 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ cert-options-886452 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ -p cert-options-886452 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-options-886452                                                                                                                                                                                                                              │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-132097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ stop    │ -p old-k8s-version-132097 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-132097 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ image   │ old-k8s-version-132097 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p old-k8s-version-132097 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ unpause │ -p old-k8s-version-132097 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p old-k8s-version-132097                                                                                                                                                                                                                           │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p cert-expiration-918102 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p old-k8s-version-132097                                                                                                                                                                                                                           │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p default-k8s-diff-port-118762 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p cert-expiration-918102                                                                                                                                                                                                                           │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p embed-certs-672503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:59:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:59:40.577485  216074 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:59:40.577691  216074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:40.577718  216074 out.go:374] Setting ErrFile to fd 2...
	I1123 08:59:40.577739  216074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:40.578089  216074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:59:40.578573  216074 out.go:368] Setting JSON to false
	I1123 08:59:40.579525  216074 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6133,"bootTime":1763882248,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:59:40.579625  216074 start.go:143] virtualization:  
	I1123 08:59:40.583259  216074 out.go:179] * [embed-certs-672503] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:59:40.587830  216074 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:59:40.587967  216074 notify.go:221] Checking for updates...
	I1123 08:59:40.594558  216074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:59:40.597788  216074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:59:40.601027  216074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:59:40.604233  216074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:59:40.607539  216074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:59:40.611140  216074 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:40.611247  216074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:59:40.656282  216074 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:59:40.656413  216074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:40.752458  216074 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:59:40.738300735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:40.752566  216074 docker.go:319] overlay module found
	I1123 08:59:40.756622  216074 out.go:179] * Using the docker driver based on user configuration
	I1123 08:59:40.759788  216074 start.go:309] selected driver: docker
	I1123 08:59:40.759810  216074 start.go:927] validating driver "docker" against <nil>
	I1123 08:59:40.759823  216074 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:59:40.760559  216074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:40.840879  216074 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-23 08:59:40.831791559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:40.841036  216074 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:59:40.841265  216074 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:59:40.844487  216074 out.go:179] * Using Docker driver with root privileges
	I1123 08:59:40.847551  216074 cni.go:84] Creating CNI manager for ""
	I1123 08:59:40.847624  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:40.847640  216074 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:59:40.847726  216074 start.go:353] cluster config:
	{Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:40.850947  216074 out.go:179] * Starting "embed-certs-672503" primary control-plane node in "embed-certs-672503" cluster
	I1123 08:59:40.853960  216074 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:59:40.856924  216074 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:59:40.859875  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:40.859924  216074 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 08:59:40.859933  216074 cache.go:65] Caching tarball of preloaded images
	I1123 08:59:40.859968  216074 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:59:40.860013  216074 preload.go:238] Found /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:59:40.860024  216074 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:59:40.860143  216074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json ...
	I1123 08:59:40.860163  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json: {Name:mkb81d39d58a71dac5e98d24c241cff9b78e273e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:40.879736  216074 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:59:40.879759  216074 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:59:40.879779  216074 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:59:40.879808  216074 start.go:360] acquireMachinesLock for embed-certs-672503: {Name:mk52b3d46d7a43264b4677c9fc6abfc0706853fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:59:40.879915  216074 start.go:364] duration metric: took 86.869µs to acquireMachinesLock for "embed-certs-672503"
	I1123 08:59:40.879944  216074 start.go:93] Provisioning new machine with config: &{Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false Cus
tomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:59:40.880019  216074 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:59:39.039954  214550 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-118762:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.007752645s)
	I1123 08:59:39.039991  214550 kic.go:203] duration metric: took 5.007913738s to extract preloaded images to volume ...
	W1123 08:59:39.040149  214550 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:59:39.040271  214550 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:59:39.103132  214550 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-118762 --name default-k8s-diff-port-118762 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-118762 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-118762 --network default-k8s-diff-port-118762 --ip 192.168.85.2 --volume default-k8s-diff-port-118762:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:59:39.606571  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Running}}
	I1123 08:59:39.652908  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:39.675600  214550 cli_runner.go:164] Run: docker exec default-k8s-diff-port-118762 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:59:39.805153  214550 oci.go:144] the created container "default-k8s-diff-port-118762" has a running status.
	I1123 08:59:39.805181  214550 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa...
	I1123 08:59:40.603002  214550 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:59:40.646836  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:40.670926  214550 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:59:40.670945  214550 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-118762 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:59:40.744487  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:40.770445  214550 machine.go:94] provisionDockerMachine start ...
	I1123 08:59:40.770539  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:40.791316  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:40.791758  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:40.791772  214550 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:59:40.792437  214550 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51880->127.0.0.1:33064: read: connection reset by peer
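	(The handshake failure above is transient: the container has only just started and its sshd is not yet accepting connections; the same provisioning run succeeds a few seconds later at 08:59:43 below. The following is a minimal Go sketch of that dial-with-retry pattern, assuming golang.org/x/crypto/ssh, the key path and published port 33064 shown in the log; it is illustrative only, not minikube's actual native SSH client.)

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and port are taken from the log lines above (assumptions for this sketch).
	keyPath := "/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa"
	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test node only
		Timeout:         5 * time.Second,
	}
	var client *ssh.Client
	for i := 0; i < 10; i++ {
		// Early attempts may fail with "connection reset by peer" while sshd starts.
		client, err = ssh.Dial("tcp", "127.0.0.1:33064", cfg)
		if err == nil {
			break
		}
		time.Sleep(time.Second)
	}
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.Output("hostname") // same probe the provisioner runs first
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s", out)
}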
	I1123 08:59:40.883578  216074 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:59:40.883819  216074 start.go:159] libmachine.API.Create for "embed-certs-672503" (driver="docker")
	I1123 08:59:40.883864  216074 client.go:173] LocalClient.Create starting
	I1123 08:59:40.883946  216074 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem
	I1123 08:59:40.883982  216074 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:40.884002  216074 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:40.884067  216074 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem
	I1123 08:59:40.884090  216074 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:40.884109  216074 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:40.884452  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:59:40.900264  216074 cli_runner.go:211] docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:59:40.900362  216074 network_create.go:284] running [docker network inspect embed-certs-672503] to gather additional debugging logs...
	I1123 08:59:40.900388  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503
	W1123 08:59:40.916918  216074 cli_runner.go:211] docker network inspect embed-certs-672503 returned with exit code 1
	I1123 08:59:40.916950  216074 network_create.go:287] error running [docker network inspect embed-certs-672503]: docker network inspect embed-certs-672503: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-672503 not found
	I1123 08:59:40.916965  216074 network_create.go:289] output of [docker network inspect embed-certs-672503]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-672503 not found
	
	** /stderr **
	I1123 08:59:40.917065  216074 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:40.933652  216074 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ab12b2c3b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:c9:6d:7b:80:76} reservation:<nil>}
	I1123 08:59:40.933989  216074 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7f5e4a52a57c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:1a:79:b2:02:66} reservation:<nil>}
	I1123 08:59:40.934307  216074 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed031858d624 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:47:7d:04:56:4a} reservation:<nil>}
	I1123 08:59:40.934717  216074 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7270}
	I1123 08:59:40.934741  216074 network_create.go:124] attempt to create docker network embed-certs-672503 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:59:40.934796  216074 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-672503 embed-certs-672503
	I1123 08:59:40.992310  216074 network_create.go:108] docker network embed-certs-672503 192.168.76.0/24 created
	I1123 08:59:40.992345  216074 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-672503" container
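	(For reference, the network-creation step above reduces to a single docker CLI invocation. A hedged Go sketch that shells out to the same command with the same flags the log records; this is not the actual network_create.go implementation, just the equivalent call.)

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Name, subnet, gateway, MTU and labels are copied from the logged command.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.76.0/24",
		"--gateway=192.168.76.1",
		"-o", "--ip-masq", "-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=embed-certs-672503",
		"embed-certs-672503")
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("network create failed: %v\n%s", err, out)
	}
	log.Printf("created network: %s", out)
}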
	I1123 08:59:40.992424  216074 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:59:41.010086  216074 cli_runner.go:164] Run: docker volume create embed-certs-672503 --label name.minikube.sigs.k8s.io=embed-certs-672503 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:59:41.028903  216074 oci.go:103] Successfully created a docker volume embed-certs-672503
	I1123 08:59:41.029006  216074 cli_runner.go:164] Run: docker run --rm --name embed-certs-672503-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-672503 --entrypoint /usr/bin/test -v embed-certs-672503:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:59:41.597394  216074 oci.go:107] Successfully prepared a docker volume embed-certs-672503
	I1123 08:59:41.597456  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:41.597467  216074 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:59:41.597532  216074 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-672503:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
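	(The extraction step above populates the profile's Docker volume directly from the host's preload tarball by running tar inside a throwaway kicbase container; the parallel default-k8s-diff-port run at 08:59:39 shows the same step completing in about 5s. A sketch of the equivalent call, with the paths and image digest copied from the logged command; treat it as illustrative rather than minikube's kic code.)

package main

import (
	"log"
	"os/exec"
)

func main() {
	preload := "/home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/" +
		"preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f"
	// Mount the tarball read-only, mount the profile volume at /extractDir,
	// and let tar inside the kicbase image unpack the lz4-compressed preload.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", preload+":/preloaded.tar:ro",
		"-v", "embed-certs-672503:/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("preload extraction failed: %v\n%s", err, out)
	}
}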
	I1123 08:59:43.963549  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-118762
	
	I1123 08:59:43.963629  214550 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-118762"
	I1123 08:59:43.963730  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:43.982067  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:43.982376  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:43.982388  214550 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-118762 && echo "default-k8s-diff-port-118762" | sudo tee /etc/hostname
	I1123 08:59:44.162438  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-118762
	
	I1123 08:59:44.162524  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.184402  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:44.184717  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:44.184743  214550 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-118762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-118762/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-118762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:59:44.387688  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:59:44.387725  214550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-2811/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-2811/.minikube}
	I1123 08:59:44.387751  214550 ubuntu.go:190] setting up certificates
	I1123 08:59:44.387761  214550 provision.go:84] configureAuth start
	I1123 08:59:44.387823  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.406977  214550 provision.go:143] copyHostCerts
	I1123 08:59:44.407043  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem, removing ...
	I1123 08:59:44.407056  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem
	I1123 08:59:44.407135  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem (1082 bytes)
	I1123 08:59:44.407247  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem, removing ...
	I1123 08:59:44.407259  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem
	I1123 08:59:44.407287  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem (1123 bytes)
	I1123 08:59:44.407420  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem, removing ...
	I1123 08:59:44.407449  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem
	I1123 08:59:44.407501  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem (1679 bytes)
	I1123 08:59:44.407571  214550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-118762 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-118762 localhost minikube]
	I1123 08:59:44.485276  214550 provision.go:177] copyRemoteCerts
	I1123 08:59:44.485399  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:59:44.485475  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.502836  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.611676  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 08:59:44.631601  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:59:44.649182  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:59:44.666321  214550 provision.go:87] duration metric: took 278.533612ms to configureAuth
	I1123 08:59:44.666344  214550 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:59:44.666518  214550 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:44.666526  214550 machine.go:97] duration metric: took 3.896062717s to provisionDockerMachine
	I1123 08:59:44.666532  214550 client.go:176] duration metric: took 11.505696925s to LocalClient.Create
	I1123 08:59:44.666546  214550 start.go:167] duration metric: took 11.505763117s to libmachine.API.Create "default-k8s-diff-port-118762"
	I1123 08:59:44.666552  214550 start.go:293] postStartSetup for "default-k8s-diff-port-118762" (driver="docker")
	I1123 08:59:44.666561  214550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:59:44.666612  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:59:44.666651  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.683801  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.791506  214550 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:59:44.795326  214550 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:59:44.795375  214550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:59:44.795403  214550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/addons for local assets ...
	I1123 08:59:44.795479  214550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/files for local assets ...
	I1123 08:59:44.795605  214550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem -> 46242.pem in /etc/ssl/certs
	I1123 08:59:44.795716  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:59:44.804406  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:44.824224  214550 start.go:296] duration metric: took 157.657779ms for postStartSetup
	I1123 08:59:44.824627  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.842791  214550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/config.json ...
	I1123 08:59:44.845272  214550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:59:44.845334  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.870817  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.973574  214550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:59:44.978835  214550 start.go:128] duration metric: took 11.821803269s to createHost
	I1123 08:59:44.978859  214550 start.go:83] releasing machines lock for "default-k8s-diff-port-118762", held for 11.821970245s
	I1123 08:59:44.978934  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.996375  214550 ssh_runner.go:195] Run: cat /version.json
	I1123 08:59:44.996410  214550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:59:44.996429  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.997293  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:45.019323  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:45.019748  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:45.266005  214550 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:45.276798  214550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:59:45.286312  214550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:59:45.286509  214550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:59:45.400996  214550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:59:45.401066  214550 start.go:496] detecting cgroup driver to use...
	I1123 08:59:45.401106  214550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:59:45.401166  214550 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:59:45.416740  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:59:45.430174  214550 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:59:45.430277  214550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:59:45.449266  214550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:59:45.468575  214550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:59:45.593366  214550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:59:45.727407  214550 docker.go:234] disabling docker service ...
	I1123 08:59:45.727524  214550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:59:45.750566  214550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:59:45.763685  214550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:59:45.882473  214550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:59:46.015128  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:59:46.029863  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:59:46.051000  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:59:46.067292  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:59:46.081288  214550 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:59:46.081404  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:59:46.100139  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:46.120619  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:59:46.133469  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:46.142574  214550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:59:46.152921  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:59:46.164064  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:59:46.173191  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:59:46.188341  214550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:59:46.201637  214550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:59:46.214012  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:46.386854  214550 ssh_runner.go:195] Run: sudo systemctl restart containerd
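	(The sed sequence above rewrites /etc/containerd/config.toml so containerd matches the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false, the runc runtime name and CNI conf_dir are normalized, unprivileged ports are enabled, and containerd is then reloaded and restarted. A condensed Go sketch of the central edit, shown as local commands for brevity; the real steps run over SSH inside the node container via ssh_runner.)

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
}

func main() {
	// Same substitution the log records: use cgroupfs rather than the systemd cgroup driver.
	run("sudo", "sed", "-i", "-r",
		`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`,
		"/etc/containerd/config.toml")
	run("sudo", "systemctl", "daemon-reload")
	run("sudo", "systemctl", "restart", "containerd")
}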
	I1123 08:59:46.574017  214550 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:59:46.574082  214550 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:59:46.590863  214550 start.go:564] Will wait 60s for crictl version
	I1123 08:59:46.590924  214550 ssh_runner.go:195] Run: which crictl
	I1123 08:59:46.596219  214550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:59:46.641889  214550 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:59:46.641953  214550 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:46.715861  214550 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:46.799546  214550 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:59:46.802513  214550 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-118762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:46.830038  214550 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:59:46.834203  214550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:46.850678  214550 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-118762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:59:46.850809  214550 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:46.850885  214550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:46.899220  214550 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:46.899242  214550 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:59:46.899304  214550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:46.940637  214550 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:46.940658  214550 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:59:46.940666  214550 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1123 08:59:46.940760  214550 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-118762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:59:46.941123  214550 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:59:47.001942  214550 cni.go:84] Creating CNI manager for ""
	I1123 08:59:47.001962  214550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:47.001977  214550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:59:47.002000  214550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-118762 NodeName:default-k8s-diff-port-118762 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:59:47.002115  214550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-118762"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:59:47.002179  214550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:59:47.020644  214550 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:59:47.020704  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:47.037002  214550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1123 08:59:47.055802  214550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:47.079429  214550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1123 08:59:47.092521  214550 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:47.096917  214550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:47.106392  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:47.305463  214550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:47.337722  214550 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762 for IP: 192.168.85.2
	I1123 08:59:47.337739  214550 certs.go:195] generating shared ca certs ...
	I1123 08:59:47.337754  214550 certs.go:227] acquiring lock for ca certs: {Name:mk62ed57b444cc29d692b7c3030f7d32bd07c4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.337885  214550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key
	I1123 08:59:47.337928  214550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key
	I1123 08:59:47.337936  214550 certs.go:257] generating profile certs ...
	I1123 08:59:47.337988  214550 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key
	I1123 08:59:47.337997  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt with IP's: []
	I1123 08:59:47.952908  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt ...
	I1123 08:59:47.952991  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt: {Name:mkf95cd7f0813a939fc5a10b868018298b21adb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.953216  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key ...
	I1123 08:59:47.953254  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key: {Name:mkf9a2acc2c42bd0a0cf1a1f2787b6cd46ba4f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.953415  214550 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca
	I1123 08:59:47.953453  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:59:48.203697  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca ...
	I1123 08:59:48.203769  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca: {Name:mk05909547f3239afc9409b846b3fb486118a441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.203987  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca ...
	I1123 08:59:48.204023  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca: {Name:mkec035b62be2e775b2f0c85ff409f77aebf0a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.204156  214550 certs.go:382] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt
	I1123 08:59:48.204271  214550 certs.go:386] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key
	I1123 08:59:48.204380  214550 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key
	I1123 08:59:48.204418  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt with IP's: []
	I1123 08:59:48.359177  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt ...
	I1123 08:59:48.359211  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt: {Name:mkf91279fb6f4fe072e258fdea87868d2840f420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.359412  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key ...
	I1123 08:59:48.359429  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key: {Name:mkbf74023435808035706f9a2ad6638168a8a889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.359663  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem (1338 bytes)
	W1123 08:59:48.359708  214550 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:48.359723  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:48.359753  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:59:48.359783  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:48.359810  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem (1679 bytes)
	I1123 08:59:48.359858  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:48.360416  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:48.379912  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:59:48.398946  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:48.417150  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:59:48.434559  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 08:59:48.452066  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:59:48.470350  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:48.488326  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:59:48.506336  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem --> /usr/share/ca-certificates/4624.pem (1338 bytes)
	I1123 08:59:48.524422  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /usr/share/ca-certificates/46242.pem (1708 bytes)
	I1123 08:59:48.541642  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:48.559509  214550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:48.572933  214550 ssh_runner.go:195] Run: openssl version
	I1123 08:59:48.579412  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46242.pem && ln -fs /usr/share/ca-certificates/46242.pem /etc/ssl/certs/46242.pem"
	I1123 08:59:48.588035  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.591879  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:18 /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.591946  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.633205  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46242.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:48.641796  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:48.650209  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.654132  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.654249  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.695982  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:59:48.704319  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4624.pem && ln -fs /usr/share/ca-certificates/4624.pem /etc/ssl/certs/4624.pem"
	I1123 08:59:48.712849  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.716712  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:18 /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.716781  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.757938  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4624.pem /etc/ssl/certs/51391683.0"
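	(The openssl/ln pairs above install each CA into the system trust store under its subject-hash name: openssl x509 -hash prints the hash (b5213941 for minikubeCA.pem, 3ec20f2e for 46242.pem, 51391683 for 4624.pem) and a matching <hash>.0 symlink is created in /etc/ssl/certs. A small Go sketch of the same idea, assuming the minikubeCA.pem path installed above; it needs root to write into /etc/ssl/certs.)

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem"
	// Subject hash as printed by: openssl x509 -hash -noout -in <cert>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0" // e.g. b5213941.0
	if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
		log.Fatal(err)
	}
	log.Printf("linked %s -> %s", link, cert)
}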
	I1123 08:59:48.766377  214550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:48.769975  214550 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:48.770030  214550 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-118762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:48.770114  214550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:48.770174  214550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:48.795754  214550 cri.go:89] found id: ""
	I1123 08:59:48.795881  214550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:48.803757  214550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:48.811647  214550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:48.811743  214550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:48.819712  214550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:48.819733  214550 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:48.819805  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 08:59:48.827458  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:48.827560  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:48.835278  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 08:59:48.843241  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:48.843395  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:48.850790  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 08:59:48.859021  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:48.859145  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:48.866723  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 08:59:48.874202  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:48.874315  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
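The grep/rm pairs above remove any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint (here https://control-plane.minikube.internal:8444) so that kubeadm init starts from a clean slate. A minimal Go sketch of that check-and-remove loop, shelling out the same way the log does; the helper name and local execution are illustrative, not minikube's actual code:

```go
// Illustrative sketch, not minikube's actual helper: grep each kubeconfig for
// the expected control-plane endpoint and delete any file that does not
// reference it, mirroring the grep/rm pairs in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func cleanupKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero when the endpoint is absent or the file is missing.
		if err := exec.Command("sudo", "grep", "-q", endpoint, f).Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanupKubeconfigs("https://control-plane.minikube.internal:8444")
}
```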
	I1123 08:59:48.882081  214550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:48.932250  214550 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:48.932626  214550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:48.968464  214550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:48.968571  214550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:48.968634  214550 kubeadm.go:319] OS: Linux
	I1123 08:59:48.968710  214550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:48.968779  214550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:48.968852  214550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:48.968949  214550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:48.969029  214550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:48.969104  214550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:48.969191  214550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:48.969263  214550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:48.969334  214550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:49.039395  214550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:49.039547  214550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:49.039694  214550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:49.045139  214550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:46.061340  216074 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-672503:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.463759827s)
	I1123 08:59:46.061369  216074 kic.go:203] duration metric: took 4.463899193s to extract preloaded images to volume ...
	W1123 08:59:46.061515  216074 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:59:46.061700  216074 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:59:46.159063  216074 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-672503 --name embed-certs-672503 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-672503 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-672503 --network embed-certs-672503 --ip 192.168.76.2 --volume embed-certs-672503:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:59:46.530738  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Running}}
	I1123 08:59:46.558782  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:46.582800  216074 cli_runner.go:164] Run: docker exec embed-certs-672503 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:59:46.646806  216074 oci.go:144] the created container "embed-certs-672503" has a running status.
	I1123 08:59:46.646847  216074 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa...
	I1123 08:59:46.847783  216074 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:59:46.880288  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:46.917106  216074 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:59:46.917131  216074 kic_runner.go:114] Args: [docker exec --privileged embed-certs-672503 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:59:46.987070  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:47.019780  216074 machine.go:94] provisionDockerMachine start ...
	I1123 08:59:47.019874  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:47.051570  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:47.051918  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:47.051935  216074 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:59:47.052575  216074 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:59:50.211545  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-672503
	
	I1123 08:59:50.211595  216074 ubuntu.go:182] provisioning hostname "embed-certs-672503"
	I1123 08:59:50.211673  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:50.237002  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:50.237319  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:50.237337  216074 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-672503 && echo "embed-certs-672503" | sudo tee /etc/hostname
	I1123 08:59:50.436539  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-672503
	
	I1123 08:59:50.436687  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:50.465709  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:50.466029  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:50.466045  216074 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672503' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672503/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672503' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:59:49.051452  214550 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:49.051585  214550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:49.051703  214550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:50.049674  214550 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:50.094855  214550 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:50.781521  214550 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:51.007002  214550 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:51.586516  214550 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:51.587407  214550 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-118762 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:52.294730  214550 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:52.295126  214550 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-118762 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:50.619868  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:59:50.619905  216074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-2811/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-2811/.minikube}
	I1123 08:59:50.619926  216074 ubuntu.go:190] setting up certificates
	I1123 08:59:50.619937  216074 provision.go:84] configureAuth start
	I1123 08:59:50.620004  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:50.645393  216074 provision.go:143] copyHostCerts
	I1123 08:59:50.645466  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem, removing ...
	I1123 08:59:50.645475  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem
	I1123 08:59:50.645553  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem (1082 bytes)
	I1123 08:59:50.645639  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem, removing ...
	I1123 08:59:50.645644  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem
	I1123 08:59:50.645669  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem (1123 bytes)
	I1123 08:59:50.645724  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem, removing ...
	I1123 08:59:50.645729  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem
	I1123 08:59:50.645751  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem (1679 bytes)
	I1123 08:59:50.645795  216074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672503 san=[127.0.0.1 192.168.76.2 embed-certs-672503 localhost minikube]
	I1123 08:59:51.127888  216074 provision.go:177] copyRemoteCerts
	I1123 08:59:51.127960  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:59:51.128004  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.153368  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.284623  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:59:51.314621  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:59:51.335720  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:59:51.355451  216074 provision.go:87] duration metric: took 735.481705ms to configureAuth
	I1123 08:59:51.355533  216074 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:59:51.355763  216074 config.go:182] Loaded profile config "embed-certs-672503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:51.355791  216074 machine.go:97] duration metric: took 4.335986452s to provisionDockerMachine
	I1123 08:59:51.355815  216074 client.go:176] duration metric: took 10.471938723s to LocalClient.Create
	I1123 08:59:51.355856  216074 start.go:167] duration metric: took 10.472037333s to libmachine.API.Create "embed-certs-672503"
	I1123 08:59:51.355949  216074 start.go:293] postStartSetup for "embed-certs-672503" (driver="docker")
	I1123 08:59:51.355976  216074 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:59:51.356061  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:59:51.356134  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.375632  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.492356  216074 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:59:51.496551  216074 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:59:51.496580  216074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:59:51.496592  216074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/addons for local assets ...
	I1123 08:59:51.496645  216074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/files for local assets ...
	I1123 08:59:51.496721  216074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem -> 46242.pem in /etc/ssl/certs
	I1123 08:59:51.496826  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:59:51.505195  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:51.525735  216074 start.go:296] duration metric: took 169.754775ms for postStartSetup
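The postStartSetup lines above include the filesync step: anything placed under the profile's .minikube/files/<path> directory is mirrored into the node at /<path>, which is how files/etc/ssl/certs/46242.pem ends up at /etc/ssl/certs/46242.pem. A stdlib-only sketch of that scan, assuming the directory layout from the log; it only prints the local-to-remote mapping rather than copying anything over SSH:

```go
// Stdlib-only sketch of the filesync scan, assuming the directory layout from
// the log: every file under .minikube/files/<path> maps to /<path> on the node.
// It prints the mapping instead of copying anything over SSH.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	root := "/home/jenkins/minikube-integration/21969-2811/.minikube/files" // path taken from the log
	_ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		dest := "/" + strings.TrimPrefix(p, root+"/")
		fmt.Printf("local asset: %s -> %s\n", p, dest)
		return nil
	})
}
```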
	I1123 08:59:51.526206  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:51.546243  216074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json ...
	I1123 08:59:51.546511  216074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:59:51.546553  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.568894  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.680931  216074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:59:51.686143  216074 start.go:128] duration metric: took 10.806110424s to createHost
	I1123 08:59:51.686171  216074 start.go:83] releasing machines lock for "embed-certs-672503", held for 10.806242996s
	I1123 08:59:51.686257  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:51.705486  216074 ssh_runner.go:195] Run: cat /version.json
	I1123 08:59:51.705573  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.705949  216074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:59:51.706024  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.760593  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.767588  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.883448  216074 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:51.991493  216074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:59:51.996626  216074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:59:51.996703  216074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:59:52.044663  216074 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:59:52.044689  216074 start.go:496] detecting cgroup driver to use...
	I1123 08:59:52.044721  216074 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:59:52.044781  216074 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:59:52.061494  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:59:52.076189  216074 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:59:52.076260  216074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:59:52.094291  216074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:59:52.114994  216074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:59:52.292895  216074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:59:52.481817  216074 docker.go:234] disabling docker service ...
	I1123 08:59:52.481931  216074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:59:52.508317  216074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:59:52.526364  216074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:59:52.700213  216074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:59:52.897094  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:59:52.915331  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:59:52.931211  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:59:52.946225  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:59:52.956101  216074 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:59:52.956226  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:59:52.965762  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:52.975341  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:59:52.985192  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:52.994955  216074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:59:53.010410  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:59:53.027207  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:59:53.042077  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:59:53.054424  216074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:59:53.063874  216074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:59:53.072557  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:53.226737  216074 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:59:53.443692  216074 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:59:53.443892  216074 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:59:53.448833  216074 start.go:564] Will wait 60s for crictl version
	I1123 08:59:53.448947  216074 ssh_runner.go:195] Run: which crictl
	I1123 08:59:53.453157  216074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:59:53.486128  216074 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:59:53.486258  216074 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:53.513131  216074 ssh_runner.go:195] Run: containerd --version
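The two "Will wait 60s" lines above show how the start sequence blocks after restarting containerd: first for the socket file to appear, then for crictl to report a runtime version. A minimal sketch of the socket wait, assuming plain polling with a deadline; the helper name and 500ms interval are choices for the example, not minikube's exact behaviour:

```go
// Minimal sketch of the socket wait described above ("Will wait 60s for socket
// path /run/containerd/containerd.sock"): poll with a deadline until the path
// exists. The helper name and the 500ms interval are assumptions for the example.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // socket file is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("containerd socket is ready")
}
```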
	I1123 08:59:53.540090  216074 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:59:53.543140  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:53.564398  216074 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:59:53.569921  216074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
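The bash one-liner above keeps the host.minikube.internal mapping idempotent: any existing line for the name is filtered out, the fresh IP-to-name entry is appended, and the result is copied back over /etc/hosts (the same idiom appears later for control-plane.minikube.internal). A hedged stdlib sketch of the same upsert, operating on a scratch copy of the file rather than the real /etc/hosts:

```go
// Sketch of the idempotent /etc/hosts update shown in the log; "hosts.copy"
// below is a scratch file for the example, not the real /etc/hosts.
package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any line that already maps the name, then appends ip<TAB>name.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("hosts.copy", "192.168.76.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```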
	I1123 08:59:53.584791  216074 kubeadm.go:884] updating cluster {Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:59:53.584953  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:53.585060  216074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:53.625666  216074 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:53.625695  216074 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:59:53.625759  216074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:53.653757  216074 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:53.653781  216074 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:59:53.653789  216074 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 08:59:53.653881  216074 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-672503 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:59:53.653948  216074 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:59:53.696072  216074 cni.go:84] Creating CNI manager for ""
	I1123 08:59:53.696098  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:53.696113  216074 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:59:53.696140  216074 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672503 NodeName:embed-certs-672503 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:59:53.696260  216074 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-672503"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:59:53.696337  216074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:59:53.705716  216074 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:59:53.705795  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:53.718287  216074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1123 08:59:53.737046  216074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:53.760149  216074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1123 08:59:53.778487  216074 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:53.782565  216074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:53.792649  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:53.947067  216074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:53.969434  216074 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503 for IP: 192.168.76.2
	I1123 08:59:53.969452  216074 certs.go:195] generating shared ca certs ...
	I1123 08:59:53.969468  216074 certs.go:227] acquiring lock for ca certs: {Name:mk62ed57b444cc29d692b7c3030f7d32bd07c4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:53.969604  216074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key
	I1123 08:59:53.969644  216074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key
	I1123 08:59:53.969650  216074 certs.go:257] generating profile certs ...
	I1123 08:59:53.969704  216074 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key
	I1123 08:59:53.969718  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt with IP's: []
	I1123 08:59:54.209900  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt ...
	I1123 08:59:54.209965  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt: {Name:mk5c525ca71ddd2fe2c7f6b3ca8599f23905a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.210184  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key ...
	I1123 08:59:54.210197  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key: {Name:mk8943be44317db4dff6c1e7eaf6a19a57aa6c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.210284  216074 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae
	I1123 08:59:54.210296  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:59:54.801069  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae ...
	I1123 08:59:54.801096  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae: {Name:mk380799870e5ea7b7c67a4d865af58b1de5aef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.801278  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae ...
	I1123 08:59:54.801290  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae: {Name:mk102df1c6315a508518783bccf3cb2f81c38779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.801364  216074 certs.go:382] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt
	I1123 08:59:54.801439  216074 certs.go:386] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key
	I1123 08:59:54.801491  216074 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key
	I1123 08:59:54.801507  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt with IP's: []
	I1123 08:59:55.253694  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt ...
	I1123 08:59:55.253767  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt: {Name:mkdf06b6c921783e84858386a11a6aa335d63967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:55.253999  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key ...
	I1123 08:59:55.254013  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key: {Name:mk979f2bcf5527fe8ab1fb441ce8c10881831a69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
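The certs.go lines above generate the per-profile certificates signed by the shared minikubeCA, with the apiserver cert carrying the IP SANs listed in the log (10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2). A self-contained crypto/x509 sketch of that signing step; it creates a throwaway CA in memory instead of loading .minikube/ca.key, so names, key sizes and validity periods are illustrative only:

```go
// Self-contained crypto/x509 sketch of signing an apiserver cert with the IP
// SANs from the log. A throwaway CA is created in memory; error handling is
// elided for brevity.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Stand-in CA key and certificate (minikube would load the shared minikubeCA instead).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Leaf (apiserver) certificate carrying the IP SANs seen in the log.
	leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
		},
	}
	leafDER, _ := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
}
```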
	I1123 08:59:55.254199  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem (1338 bytes)
	W1123 08:59:55.254240  216074 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:55.254249  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:55.254277  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:59:55.254303  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:55.254368  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem (1679 bytes)
	I1123 08:59:55.254413  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:55.255001  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:55.275757  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:59:55.301850  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:55.327043  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:59:55.356120  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:59:55.379337  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:59:55.403251  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:55.432903  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:59:55.452955  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:55.477346  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem --> /usr/share/ca-certificates/4624.pem (1338 bytes)
	I1123 08:59:55.510351  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /usr/share/ca-certificates/46242.pem (1708 bytes)
	I1123 08:59:55.531366  216074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:55.546185  216074 ssh_runner.go:195] Run: openssl version
	I1123 08:59:55.552895  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4624.pem && ln -fs /usr/share/ca-certificates/4624.pem /etc/ssl/certs/4624.pem"
	I1123 08:59:55.562322  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.566546  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:18 /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.566661  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.608819  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4624.pem /etc/ssl/certs/51391683.0"
	I1123 08:59:55.617792  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46242.pem && ln -fs /usr/share/ca-certificates/46242.pem /etc/ssl/certs/46242.pem"
	I1123 08:59:55.626621  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.631031  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:18 /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.631147  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.673213  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46242.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:55.682467  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:55.691629  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.696005  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.696116  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.737391  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
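The openssl/ln pairs above install each CA into the node's hashed certificate directory: `openssl x509 -hash -noout` prints the subject-name hash (b5213941 for minikubeCA.pem in this run), and a <hash>.0 symlink under /etc/ssl/certs is what OpenSSL's lookup expects. A small sketch of that computation; it assumes openssl is on PATH and prints the ln command instead of touching /etc/ssl/certs:

```go
// Sketch of the subject-hash computation behind the <hash>.0 symlinks; it
// shells out to openssl (assumed to be on PATH) and prints the ln command
// instead of touching /etc/ssl/certs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // "b5213941" in the log above
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", cert, hash)
}
```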
	I1123 08:59:55.746485  216074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:55.750669  216074 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:55.750779  216074 kubeadm.go:401] StartCluster: {Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:55.750882  216074 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:55.750971  216074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:55.781886  216074 cri.go:89] found id: ""
	I1123 08:59:55.782008  216074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:55.792128  216074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:55.801015  216074 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:55.801120  216074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:55.811498  216074 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:55.811567  216074 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:55.811651  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:59:55.820390  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:55.820489  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:55.828204  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:59:55.837261  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:55.837355  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:55.845286  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:59:55.854064  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:55.854174  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:55.861833  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:59:55.870496  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:55.870610  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:59:55.878638  216074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:55.935971  216074 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:55.937587  216074 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:56.004559  216074 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:56.004761  216074 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:56.004834  216074 kubeadm.go:319] OS: Linux
	I1123 08:59:56.004912  216074 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:56.004998  216074 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:56.005083  216074 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:56.005163  216074 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:56.005244  216074 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:56.005326  216074 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:56.005405  216074 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:56.005488  216074 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:56.005568  216074 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:56.119904  216074 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:56.120070  216074 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:56.120207  216074 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:56.130630  216074 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:54.179851  214550 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:59:55.466764  214550 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:55.672141  214550 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:59:55.672731  214550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:59:55.836881  214550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:59:56.018357  214550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:59:56.361926  214550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:59:56.873997  214550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:59:57.413691  214550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:59:57.414774  214550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:59:57.417706  214550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:59:57.421342  214550 out.go:252]   - Booting up control plane ...
	I1123 08:59:57.421437  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:59:57.426176  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:59:57.426253  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:59:57.445605  214550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:59:57.445714  214550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:59:57.456012  214550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:59:57.456111  214550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:59:57.456152  214550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:59:57.617060  214550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:59:57.617179  214550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:59:56.136350  216074 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:56.136541  216074 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:56.136667  216074 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:57.121922  216074 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:57.436901  216074 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:57.609063  216074 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:58.013484  216074 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:58.298959  216074 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:58.303729  216074 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-672503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:58.349481  216074 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:58.350030  216074 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-672503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:59.325836  216074 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:00:00.299809  216074 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:59.119693  214550 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500938234s
	I1123 08:59:59.122603  214550 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:59:59.122949  214550 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1123 08:59:59.123601  214550 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:59:59.124077  214550 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:00:00.879718  216074 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:00:00.879799  216074 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:00:01.122151  216074 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:00:03.397018  216074 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:00:05.387724  216074 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:00:05.691737  216074 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:00:06.099799  216074 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:00:06.099904  216074 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:00:06.107751  216074 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:00:03.716327  214550 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.591863015s
	I1123 09:00:09.442146  214550 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.317417042s
	I1123 09:00:09.630647  214550 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.507233792s
	I1123 09:00:09.661041  214550 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:00:09.696775  214550 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:00:09.724658  214550 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:00:09.725105  214550 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-118762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:00:09.789313  214550 kubeadm.go:319] [bootstrap-token] Using token: d97ou5.m8drvm11cz5qqhuf
	I1123 09:00:06.111147  216074 out.go:252]   - Booting up control plane ...
	I1123 09:00:06.111260  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:00:06.111338  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:00:06.111425  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:00:06.141906  216074 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:00:06.142016  216074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:00:06.152623  216074 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:00:06.152727  216074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:00:06.152767  216074 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:00:06.424623  216074 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:00:06.424743  216074 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:00:07.419394  216074 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001849125s
	I1123 09:00:07.422769  216074 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:00:07.422861  216074 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 09:00:07.423174  216074 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:00:07.423260  216074 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
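The [control-plane-check] phase above simply polls the three health endpoints it lists until each component answers. For manual debugging the same probes can be run from inside the node; this is only a sketch, assuming curl is present in the node image (the -k flag skips TLS verification because curl does not trust the cluster's serving certificates):

	minikube ssh -p embed-certs-672503 -- curl -ks https://127.0.0.1:10257/healthz    # kube-controller-manager
	minikube ssh -p embed-certs-672503 -- curl -ks https://127.0.0.1:10259/livez      # kube-scheduler
	minikube ssh -p embed-certs-672503 -- curl -ks https://192.168.76.2:8443/livez    # kube-apiserver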
	I1123 09:00:09.792446  214550 out.go:252]   - Configuring RBAC rules ...
	I1123 09:00:09.792565  214550 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:00:09.822919  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:00:09.841947  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:00:09.852584  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:00:09.860084  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:00:09.867079  214550 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:00:10.041393  214550 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:00:10.492226  214550 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:00:11.049466  214550 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:00:11.050970  214550 kubeadm.go:319] 
	I1123 09:00:11.051044  214550 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:00:11.051049  214550 kubeadm.go:319] 
	I1123 09:00:11.051126  214550 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:00:11.051130  214550 kubeadm.go:319] 
	I1123 09:00:11.051155  214550 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:00:11.054107  214550 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:00:11.054173  214550 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:00:11.054178  214550 kubeadm.go:319] 
	I1123 09:00:11.054232  214550 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:00:11.054259  214550 kubeadm.go:319] 
	I1123 09:00:11.054308  214550 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:00:11.054312  214550 kubeadm.go:319] 
	I1123 09:00:11.054364  214550 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:00:11.054439  214550 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:00:11.054508  214550 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:00:11.054514  214550 kubeadm.go:319] 
	I1123 09:00:11.054918  214550 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:00:11.054999  214550 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:00:11.055003  214550 kubeadm.go:319] 
	I1123 09:00:11.055310  214550 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token d97ou5.m8drvm11cz5qqhuf \
	I1123 09:00:11.055433  214550 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c \
	I1123 09:00:11.055653  214550 kubeadm.go:319] 	--control-plane 
	I1123 09:00:11.055662  214550 kubeadm.go:319] 
	I1123 09:00:11.056081  214550 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:00:11.056091  214550 kubeadm.go:319] 
	I1123 09:00:11.056374  214550 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token d97ou5.m8drvm11cz5qqhuf \
	I1123 09:00:11.056668  214550 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c 
	I1123 09:00:11.065038  214550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:00:11.065464  214550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:00:11.065590  214550 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:00:11.065601  214550 cni.go:84] Creating CNI manager for ""
	I1123 09:00:11.065609  214550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:00:11.068935  214550 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:00:11.071817  214550 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:00:11.083987  214550 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:00:11.084065  214550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:00:11.157462  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
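Once the cluster is reachable, a quick way to confirm that the kindnet CNI applied here actually came up is to list its pods; a minimal sketch, assuming the shipped manifest labels the kindnet pods with app=kindnet:

	kubectl --context default-k8s-diff-port-118762 -n kube-system get pods -l app=kindnet -o wide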
	I1123 09:00:11.877723  214550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:00:11.877851  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:11.877919  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-118762 minikube.k8s.io/updated_at=2025_11_23T09_00_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=default-k8s-diff-port-118762 minikube.k8s.io/primary=true
	I1123 09:00:12.400645  214550 ops.go:34] apiserver oom_adj: -16
	I1123 09:00:12.400749  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.479703  216074 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.056359214s
	I1123 09:00:12.901058  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:13.400921  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:13.901348  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.400890  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.901622  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:15.401708  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:15.797055  214550 kubeadm.go:1114] duration metric: took 3.919248598s to wait for elevateKubeSystemPrivileges
	I1123 09:00:15.797081  214550 kubeadm.go:403] duration metric: took 27.027055323s to StartCluster
	I1123 09:00:15.797098  214550 settings.go:142] acquiring lock: {Name:mkd0156f6f98ed352de83fb5c4c92474ddea9220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:15.797159  214550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 09:00:15.797780  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/kubeconfig: {Name:mk75cb4a9442799c344ac747e18ea4edd6e23c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:15.797984  214550 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:00:15.798066  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:00:15.798303  214550 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:00:15.798340  214550 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:00:15.798395  214550 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-118762"
	I1123 09:00:15.798414  214550 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-118762"
	I1123 09:00:15.798437  214550 host.go:66] Checking if "default-k8s-diff-port-118762" exists ...
	I1123 09:00:15.798912  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.799494  214550 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-118762"
	I1123 09:00:15.799518  214550 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-118762"
	I1123 09:00:15.799812  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.802617  214550 out.go:179] * Verifying Kubernetes components...
	I1123 09:00:15.805826  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:15.840681  214550 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-118762"
	I1123 09:00:15.840730  214550 host.go:66] Checking if "default-k8s-diff-port-118762" exists ...
	I1123 09:00:15.841178  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.841365  214550 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:00:15.845719  214550 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:15.845739  214550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:00:15.845799  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 09:00:15.885107  214550 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:15.885129  214550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:00:15.885196  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 09:00:15.885424  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 09:00:15.922980  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 09:00:16.516094  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:00:16.516301  214550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:16.565568  214550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:16.660294  214550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:17.770086  214550 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.253733356s)
	I1123 09:00:17.770803  214550 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-118762" to be "Ready" ...
	I1123 09:00:17.771113  214550 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.254946263s)
	I1123 09:00:17.771140  214550 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 09:00:18.288784  214550 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-118762" context rescaled to 1 replicas
	I1123 09:00:18.294378  214550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.634044217s)
	I1123 09:00:18.294508  214550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.728864491s)
	I1123 09:00:18.313019  214550 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 09:00:18.174934  216074 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.752142419s
	I1123 09:00:18.924553  216074 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.501560337s
	I1123 09:00:18.944911  216074 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:00:18.969340  216074 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:00:18.982694  216074 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:00:18.982935  216074 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-672503 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:00:18.996135  216074 kubeadm.go:319] [bootstrap-token] Using token: n9250s.xdwmypsz1r225um6
	I1123 09:00:18.999202  216074 out.go:252]   - Configuring RBAC rules ...
	I1123 09:00:18.999323  216074 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:00:19.010682  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:00:19.023889  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:00:19.027010  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:00:19.034948  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:00:19.039786  216074 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:00:19.331973  216074 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:00:19.770619  216074 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:00:20.331084  216074 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:00:20.332385  216074 kubeadm.go:319] 
	I1123 09:00:20.332460  216074 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:00:20.332472  216074 kubeadm.go:319] 
	I1123 09:00:20.332550  216074 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:00:20.332554  216074 kubeadm.go:319] 
	I1123 09:00:20.332585  216074 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:00:20.332649  216074 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:00:20.332706  216074 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:00:20.332714  216074 kubeadm.go:319] 
	I1123 09:00:20.332768  216074 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:00:20.332775  216074 kubeadm.go:319] 
	I1123 09:00:20.332826  216074 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:00:20.332834  216074 kubeadm.go:319] 
	I1123 09:00:20.332886  216074 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:00:20.332964  216074 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:00:20.333036  216074 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:00:20.333044  216074 kubeadm.go:319] 
	I1123 09:00:20.333141  216074 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:00:20.333222  216074 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:00:20.333230  216074 kubeadm.go:319] 
	I1123 09:00:20.333314  216074 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token n9250s.xdwmypsz1r225um6 \
	I1123 09:00:20.333421  216074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c \
	I1123 09:00:20.333454  216074 kubeadm.go:319] 	--control-plane 
	I1123 09:00:20.333461  216074 kubeadm.go:319] 
	I1123 09:00:20.333554  216074 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:00:20.333574  216074 kubeadm.go:319] 
	I1123 09:00:20.333657  216074 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n9250s.xdwmypsz1r225um6 \
	I1123 09:00:20.333764  216074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c 
	I1123 09:00:20.339187  216074 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:00:20.339460  216074 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:00:20.339572  216074 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:00:20.339594  216074 cni.go:84] Creating CNI manager for ""
	I1123 09:00:20.339606  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:00:20.342914  216074 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:00:20.345744  216074 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:00:20.350352  216074 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:00:20.350371  216074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:00:20.365062  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:00:18.315850  214550 addons.go:530] duration metric: took 2.517504837s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1123 09:00:19.773873  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:21.774051  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:20.682862  216074 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:00:20.683008  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:20.683107  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-672503 minikube.k8s.io/updated_at=2025_11_23T09_00_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=embed-certs-672503 minikube.k8s.io/primary=true
	I1123 09:00:20.861424  216074 ops.go:34] apiserver oom_adj: -16
	I1123 09:00:20.881440  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:21.382484  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:21.881564  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:22.381797  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:22.881698  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:23.382044  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:23.881478  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:24.381553  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:24.882135  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:25.085445  216074 kubeadm.go:1114] duration metric: took 4.402483472s to wait for elevateKubeSystemPrivileges
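The repeated 'kubectl get sa default' calls above are a readiness poll: elevateKubeSystemPrivileges is only treated as complete once the default ServiceAccount exists. A rough shell equivalent of that wait (a sketch, not the test's actual code):

	until kubectl --context embed-certs-672503 -n default get serviceaccount default >/dev/null 2>&1; do
	  sleep 0.5
	done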
	I1123 09:00:25.085479  216074 kubeadm.go:403] duration metric: took 29.334704925s to StartCluster
	I1123 09:00:25.085499  216074 settings.go:142] acquiring lock: {Name:mkd0156f6f98ed352de83fb5c4c92474ddea9220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:25.085586  216074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 09:00:25.087626  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/kubeconfig: {Name:mk75cb4a9442799c344ac747e18ea4edd6e23c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:25.087936  216074 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:00:25.088691  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:00:25.089017  216074 config.go:182] Loaded profile config "embed-certs-672503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:00:25.089061  216074 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:00:25.089133  216074 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-672503"
	I1123 09:00:25.089153  216074 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-672503"
	I1123 09:00:25.089179  216074 host.go:66] Checking if "embed-certs-672503" exists ...
	I1123 09:00:25.089653  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.090352  216074 addons.go:70] Setting default-storageclass=true in profile "embed-certs-672503"
	I1123 09:00:25.090381  216074 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672503"
	I1123 09:00:25.090715  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.093412  216074 out.go:179] * Verifying Kubernetes components...
	I1123 09:00:25.100650  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:25.132922  216074 addons.go:239] Setting addon default-storageclass=true in "embed-certs-672503"
	I1123 09:00:25.132970  216074 host.go:66] Checking if "embed-certs-672503" exists ...
	I1123 09:00:25.133464  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.134451  216074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:00:25.137634  216074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:25.137660  216074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:00:25.137734  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 09:00:25.175531  216074 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:25.175555  216074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:00:25.175631  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 09:00:25.190357  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 09:00:25.214325  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 09:00:25.395679  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:00:25.445659  216074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:25.568912  216074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:25.606764  216074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:26.047827  216074 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 09:00:26.050542  216074 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672503" to be "Ready" ...
	I1123 09:00:26.465272  216074 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1123 09:00:23.774226  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:26.274269  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:26.468271  216074 addons.go:530] duration metric: took 1.379204566s for enable addons: enabled=[default-storageclass storage-provisioner]
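The two addons enabled above can also be inspected or toggled after the fact with the minikube CLI; a minimal sketch using the profile name from this run:

	minikube -p embed-certs-672503 addons list
	minikube -p embed-certs-672503 addons enable storage-provisioner
	minikube -p embed-certs-672503 addons enable default-storageclass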
	I1123 09:00:26.552103  216074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-672503" context rescaled to 1 replicas
	W1123 09:00:28.054477  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:30.054656  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:28.774465  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:30.774882  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:32.553443  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:35.054660  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:33.274428  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:35.774260  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:37.554121  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:40.055622  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:38.273771  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:40.773644  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:42.553668  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:44.553840  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:43.273604  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:45.275951  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:47.773735  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:47.054612  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:49.553846  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:49.774526  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:52.273699  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:51.554200  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:54.053723  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:54.274489  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:56.773822  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:57.776587  214550 node_ready.go:49] node "default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:57.776614  214550 node_ready.go:38] duration metric: took 40.005787911s for node "default-k8s-diff-port-118762" to be "Ready" ...
	I1123 09:00:57.776628  214550 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:00:57.776688  214550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:00:57.792566  214550 api_server.go:72] duration metric: took 41.994554549s to wait for apiserver process to appear ...
	I1123 09:00:57.792589  214550 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:00:57.792608  214550 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 09:00:57.801332  214550 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 09:00:57.802591  214550 api_server.go:141] control plane version: v1.34.1
	I1123 09:00:57.802671  214550 api_server.go:131] duration metric: took 10.074405ms to wait for apiserver health ...
	I1123 09:00:57.802696  214550 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:00:57.806165  214550 system_pods.go:59] 8 kube-system pods found
	I1123 09:00:57.806249  214550 system_pods.go:61] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:57.806272  214550 system_pods.go:61] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:57.806312  214550 system_pods.go:61] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:57.806336  214550 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:57.806359  214550 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:57.806397  214550 system_pods.go:61] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:57.806420  214550 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:57.806446  214550 system_pods.go:61] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:57.806485  214550 system_pods.go:74] duration metric: took 3.749386ms to wait for pod list to return data ...
	I1123 09:00:57.806513  214550 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:00:57.809265  214550 default_sa.go:45] found service account: "default"
	I1123 09:00:57.809285  214550 default_sa.go:55] duration metric: took 2.751519ms for default service account to be created ...
	I1123 09:00:57.809298  214550 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:00:57.811926  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:57.811955  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:57.811962  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:57.811968  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:57.811972  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:57.811977  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:57.811980  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:57.811984  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:57.811991  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:57.812009  214550 retry.go:31] will retry after 274.029839ms: missing components: kube-dns
	I1123 09:00:58.095441  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:58.095474  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:58.095481  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:58.095487  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:58.095491  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:58.095497  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:58.095502  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:58.095506  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:58.095511  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:58.095526  214550 retry.go:31] will retry after 259.858354ms: missing components: kube-dns
	I1123 09:00:58.359494  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:58.359527  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Running
	I1123 09:00:58.359536  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:58.359542  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:58.359546  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:58.359551  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:58.359556  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:58.359560  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:58.359564  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Running
	I1123 09:00:58.359572  214550 system_pods.go:126] duration metric: took 550.268629ms to wait for k8s-apps to be running ...
	I1123 09:00:58.359583  214550 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:00:58.359641  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:00:58.373607  214550 system_svc.go:56] duration metric: took 14.015669ms WaitForService to wait for kubelet
	I1123 09:00:58.373638  214550 kubeadm.go:587] duration metric: took 42.575629379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:00:58.373657  214550 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:00:58.376361  214550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:00:58.376394  214550 node_conditions.go:123] node cpu capacity is 2
	I1123 09:00:58.376408  214550 node_conditions.go:105] duration metric: took 2.746055ms to run NodePressure ...
	I1123 09:00:58.376419  214550 start.go:242] waiting for startup goroutines ...
	I1123 09:00:58.376427  214550 start.go:247] waiting for cluster config update ...
	I1123 09:00:58.376438  214550 start.go:256] writing updated cluster config ...
	I1123 09:00:58.376721  214550 ssh_runner.go:195] Run: rm -f paused
	I1123 09:00:58.380292  214550 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:00:58.385153  214550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r5snd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.390028  214550 pod_ready.go:94] pod "coredns-66bc5c9577-r5snd" is "Ready"
	I1123 09:00:58.390067  214550 pod_ready.go:86] duration metric: took 4.884639ms for pod "coredns-66bc5c9577-r5snd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.392315  214550 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.396380  214550 pod_ready.go:94] pod "etcd-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.396450  214550 pod_ready.go:86] duration metric: took 4.109265ms for pod "etcd-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.398716  214550 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.403219  214550 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.403254  214550 pod_ready.go:86] duration metric: took 4.51516ms for pod "kube-apiserver-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.405723  214550 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.785140  214550 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.785167  214550 pod_ready.go:86] duration metric: took 379.369705ms for pod "kube-controller-manager-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.985264  214550 pod_ready.go:83] waiting for pod "kube-proxy-fwc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.387683  214550 pod_ready.go:94] pod "kube-proxy-fwc9v" is "Ready"
	I1123 09:00:59.387712  214550 pod_ready.go:86] duration metric: took 402.417123ms for pod "kube-proxy-fwc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.588360  214550 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.985884  214550 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:59.985910  214550 pod_ready.go:86] duration metric: took 397.484705ms for pod "kube-scheduler-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.985924  214550 pod_ready.go:40] duration metric: took 1.605599928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:00.360876  214550 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:01:00.365235  214550 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-118762" cluster and "default" namespace by default
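With kubectl now pointed at the new cluster, the node-readiness wait that node_ready.go performs above can be reproduced with a single command; a sketch against this profile (the node name matches the profile name, per the mark-control-plane step earlier in the log):

	kubectl --context default-k8s-diff-port-118762 wait --for=condition=Ready node/default-k8s-diff-port-118762 --timeout=6m
	kubectl --context default-k8s-diff-port-118762 get nodes -o wide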
	W1123 09:00:56.054171  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:58.059777  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:00.201612  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:02.554079  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:05.054145  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	I1123 09:01:06.553619  216074 node_ready.go:49] node "embed-certs-672503" is "Ready"
	I1123 09:01:06.553653  216074 node_ready.go:38] duration metric: took 40.503031578s for node "embed-certs-672503" to be "Ready" ...
	I1123 09:01:06.553667  216074 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:01:06.553728  216074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:01:06.566313  216074 api_server.go:72] duration metric: took 41.478343311s to wait for apiserver process to appear ...
	I1123 09:01:06.566341  216074 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:01:06.566374  216074 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:01:06.574435  216074 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:01:06.575998  216074 api_server.go:141] control plane version: v1.34.1
	I1123 09:01:06.576024  216074 api_server.go:131] duration metric: took 9.676749ms to wait for apiserver health ...
	I1123 09:01:06.576034  216074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:01:06.579331  216074 system_pods.go:59] 8 kube-system pods found
	I1123 09:01:06.579491  216074 system_pods.go:61] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.579500  216074 system_pods.go:61] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.579506  216074 system_pods.go:61] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.579511  216074 system_pods.go:61] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.579516  216074 system_pods.go:61] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.579524  216074 system_pods.go:61] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.579529  216074 system_pods.go:61] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.579541  216074 system_pods.go:61] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.579548  216074 system_pods.go:74] duration metric: took 3.508309ms to wait for pod list to return data ...
	I1123 09:01:06.579562  216074 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:01:06.584140  216074 default_sa.go:45] found service account: "default"
	I1123 09:01:06.584219  216074 default_sa.go:55] duration metric: took 4.649963ms for default service account to be created ...
	I1123 09:01:06.584244  216074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:01:06.587869  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:06.587906  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.587913  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.587919  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.587923  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.587929  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.587933  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.587938  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.587945  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.587968  216074 retry.go:31] will retry after 247.424175ms: missing components: kube-dns
	I1123 09:01:06.841170  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:06.841208  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.841215  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.841222  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.841227  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.841232  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.841237  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.841241  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.841246  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.841262  216074 retry.go:31] will retry after 283.378756ms: missing components: kube-dns
	I1123 09:01:07.129581  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.129666  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.129688  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.129732  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.129759  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.129784  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.129819  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.129847  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.129870  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.129915  216074 retry.go:31] will retry after 365.111173ms: missing components: kube-dns
	I1123 09:01:07.499321  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.499446  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.499463  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.499471  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.499475  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.499500  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.499508  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.499546  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.499559  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.499580  216074 retry.go:31] will retry after 378.113017ms: missing components: kube-dns
	I1123 09:01:07.881489  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.881535  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.881542  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.881549  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.881554  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.881559  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.881562  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.881566  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.881570  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.881588  216074 retry.go:31] will retry after 690.773315ms: missing components: kube-dns
	I1123 09:01:08.576591  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:08.576623  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Running
	I1123 09:01:08.576630  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:08.576635  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:08.576657  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:08.576662  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:08.576666  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:08.576671  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:08.576676  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:08.576687  216074 system_pods.go:126] duration metric: took 1.992424101s to wait for k8s-apps to be running ...
	I1123 09:01:08.576700  216074 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:01:08.576756  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:01:08.591468  216074 system_svc.go:56] duration metric: took 14.759167ms WaitForService to wait for kubelet
	I1123 09:01:08.591497  216074 kubeadm.go:587] duration metric: took 43.503532438s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:01:08.591516  216074 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:01:08.594570  216074 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:01:08.594606  216074 node_conditions.go:123] node cpu capacity is 2
	I1123 09:01:08.594621  216074 node_conditions.go:105] duration metric: took 3.099272ms to run NodePressure ...
	I1123 09:01:08.594634  216074 start.go:242] waiting for startup goroutines ...
	I1123 09:01:08.594642  216074 start.go:247] waiting for cluster config update ...
	I1123 09:01:08.594654  216074 start.go:256] writing updated cluster config ...
	I1123 09:01:08.594942  216074 ssh_runner.go:195] Run: rm -f paused
	I1123 09:01:08.598542  216074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:08.602701  216074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nhnbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.608070  216074 pod_ready.go:94] pod "coredns-66bc5c9577-nhnbc" is "Ready"
	I1123 09:01:08.608097  216074 pod_ready.go:86] duration metric: took 5.358349ms for pod "coredns-66bc5c9577-nhnbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.610514  216074 pod_ready.go:83] waiting for pod "etcd-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.615555  216074 pod_ready.go:94] pod "etcd-embed-certs-672503" is "Ready"
	I1123 09:01:08.615582  216074 pod_ready.go:86] duration metric: took 5.042688ms for pod "etcd-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.618015  216074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.624626  216074 pod_ready.go:94] pod "kube-apiserver-embed-certs-672503" is "Ready"
	I1123 09:01:08.624654  216074 pod_ready.go:86] duration metric: took 6.607794ms for pod "kube-apiserver-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.632607  216074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.003276  216074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-672503" is "Ready"
	I1123 09:01:09.003305  216074 pod_ready.go:86] duration metric: took 370.669957ms for pod "kube-controller-manager-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.204229  216074 pod_ready.go:83] waiting for pod "kube-proxy-wbnjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.603471  216074 pod_ready.go:94] pod "kube-proxy-wbnjd" is "Ready"
	I1123 09:01:09.603500  216074 pod_ready.go:86] duration metric: took 399.242725ms for pod "kube-proxy-wbnjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.802674  216074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:10.203777  216074 pod_ready.go:94] pod "kube-scheduler-embed-certs-672503" is "Ready"
	I1123 09:01:10.203816  216074 pod_ready.go:86] duration metric: took 401.074978ms for pod "kube-scheduler-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:10.203830  216074 pod_ready.go:40] duration metric: took 1.605254448s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:10.258134  216074 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:01:10.261593  216074 out.go:179] * Done! kubectl is now configured to use "embed-certs-672503" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	ee485d2d85455       1611cd07b61d5       8 seconds ago        Running             busybox                   0                   6a938610ed5dc       busybox                                                default
	0ba4019410979       ba04bb24b9575       14 seconds ago       Running             storage-provisioner       0                   5020fccfe224f       storage-provisioner                                    kube-system
	70910ddc2313a       138784d87c9c5       14 seconds ago       Running             coredns                   0                   44978605b7387       coredns-66bc5c9577-r5snd                               kube-system
	cf43bad326873       b1a8c6f707935       55 seconds ago       Running             kindnet-cni               0                   77874024967df       kindnet-6vk7l                                          kube-system
	bc14f8da099ba       05baa95f5142d       55 seconds ago       Running             kube-proxy                0                   8b9d1b836c808       kube-proxy-fwc9v                                       kube-system
	09ad8e6abf33a       a1894772a478e       About a minute ago   Running             etcd                      0                   32ba499f97a91       etcd-default-k8s-diff-port-118762                      kube-system
	bd51fcd97f080       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   446c99929758b       kube-controller-manager-default-k8s-diff-port-118762   kube-system
	7cf9d65a2dbbc       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   dbcd52b3e1ed4       kube-scheduler-default-k8s-diff-port-118762            kube-system
	e44571e8430b7       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   618698cf61b6c       kube-apiserver-default-k8s-diff-port-118762            kube-system
	
	
	==> containerd <==
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.907568691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:d0fab715-c08e-4a99-a6ba-4b4837f47aaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5020fccfe224fa41e7c5a4304f87ac89370f5441ed25ff7f66648f1e73d92228\""
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.909752658Z" level=info msg="connecting to shim 70910ddc2313a5e0c777904ee33fd767a89765f0c9caba6cae5f963668afc2ab" address="unix:///run/containerd/s/9941077eab7d87f0200db9d032b8f718ab3cdf55a4ba3c0ed51644876741436b" protocol=ttrpc version=3
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.920596201Z" level=info msg="CreateContainer within sandbox \"5020fccfe224fa41e7c5a4304f87ac89370f5441ed25ff7f66648f1e73d92228\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.933121561Z" level=info msg="Container 0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.948690578Z" level=info msg="CreateContainer within sandbox \"5020fccfe224fa41e7c5a4304f87ac89370f5441ed25ff7f66648f1e73d92228\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd\""
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.954433988Z" level=info msg="StartContainer for \"0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd\""
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.959736394Z" level=info msg="connecting to shim 0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd" address="unix:///run/containerd/s/8ae5e588359c9237fc4bcb667c3a9546bb9504eae85b4510bb8051af51ef3f9f" protocol=ttrpc version=3
	Nov 23 09:00:58 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:58.011881271Z" level=info msg="StartContainer for \"70910ddc2313a5e0c777904ee33fd767a89765f0c9caba6cae5f963668afc2ab\" returns successfully"
	Nov 23 09:00:58 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:58.038380970Z" level=info msg="StartContainer for \"0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd\" returns successfully"
	Nov 23 09:01:01 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:01.021979558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:5c2314ab-27c6-4441-889f-af501dd53560,Namespace:default,Attempt:0,}"
	Nov 23 09:01:01 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:01.075803931Z" level=info msg="connecting to shim 6a938610ed5dcd0514f1594e43a0d209ef36d2162909060ca02239208fafea68" address="unix:///run/containerd/s/914f2ffdf4df9e947b743ded4ade49c0ce040d8ecec6d4d1c7f9f93cc6578315" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:01:01 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:01.142056966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:5c2314ab-27c6-4441-889f-af501dd53560,Namespace:default,Attempt:0,} returns sandbox id \"6a938610ed5dcd0514f1594e43a0d209ef36d2162909060ca02239208fafea68\""
	Nov 23 09:01:01 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:01.144729109Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.378342421Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.380249258Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937191"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.382738704Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.386861893Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.387334299Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.242556319s"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.387415374Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.395650955Z" level=info msg="CreateContainer within sandbox \"6a938610ed5dcd0514f1594e43a0d209ef36d2162909060ca02239208fafea68\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.408565307Z" level=info msg="Container ee485d2d854557dedb00ed54d6f67e301df9bb100e2b42c12d0e5a3a38dfdb64: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.419031198Z" level=info msg="CreateContainer within sandbox \"6a938610ed5dcd0514f1594e43a0d209ef36d2162909060ca02239208fafea68\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ee485d2d854557dedb00ed54d6f67e301df9bb100e2b42c12d0e5a3a38dfdb64\""
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.420053491Z" level=info msg="StartContainer for \"ee485d2d854557dedb00ed54d6f67e301df9bb100e2b42c12d0e5a3a38dfdb64\""
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.420978002Z" level=info msg="connecting to shim ee485d2d854557dedb00ed54d6f67e301df9bb100e2b42c12d0e5a3a38dfdb64" address="unix:///run/containerd/s/914f2ffdf4df9e947b743ded4ade49c0ce040d8ecec6d4d1c7f9f93cc6578315" protocol=ttrpc version=3
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.476406721Z" level=info msg="StartContainer for \"ee485d2d854557dedb00ed54d6f67e301df9bb100e2b42c12d0e5a3a38dfdb64\" returns successfully"
	
	
	==> coredns [70910ddc2313a5e0c777904ee33fd767a89765f0c9caba6cae5f963668afc2ab] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44118 - 742 "HINFO IN 592143518793182462.6728500283451617551. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016244033s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-118762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-118762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=default-k8s-diff-port-118762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_00_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:00:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-118762
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:01:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:00:57 +0000   Sun, 23 Nov 2025 09:00:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:00:57 +0000   Sun, 23 Nov 2025 09:00:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:00:57 +0000   Sun, 23 Nov 2025 09:00:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:00:57 +0000   Sun, 23 Nov 2025 09:00:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-118762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                2cb290fb-8655-472e-b198-65084610e8db
	  Boot ID:                    86d8501c-1df5-4d7e-90cb-d9ad951202c5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-r5snd                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-default-k8s-diff-port-118762                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-6vk7l                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-default-k8s-diff-port-118762             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-118762    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-fwc9v                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-default-k8s-diff-port-118762             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 54s                kube-proxy       
	  Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  73s (x8 over 74s)  kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     73s (x7 over 74s)  kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    73s (x8 over 74s)  kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  61s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node default-k8s-diff-port-118762 event: Registered Node default-k8s-diff-port-118762 in Controller
	  Normal   NodeReady                15s                kubelet          Node default-k8s-diff-port-118762 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014670] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505841] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033008] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.738583] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.057424] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:10] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:26] hrtimer: interrupt took 58442338 ns
	
	
	==> etcd [09ad8e6abf33a65f71b353c02b9db597ae8f1ce72e3af1ef89165c0123b77e26] <==
	{"level":"warn","ts":"2025-11-23T09:00:04.385820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.403998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.465382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.472971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.505281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.538229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.558054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.581810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.623846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.641989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.685344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.703575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.739608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.767858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.803454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.823846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.851452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.880249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.900620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.953695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.970163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:05.005077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:05.024931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:05.049222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:05.207649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:01:12 up  1:43,  0 user,  load average: 2.63, 3.48, 2.96
	Linux default-k8s-diff-port-118762 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cf43bad32687302a32aff514643f251ad92d683a18f4ad0a7bc50bf5789f2ea2] <==
	I1123 09:00:17.131135       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:00:17.131405       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:00:17.131539       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:00:17.131552       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:00:17.131565       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:00:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:00:17.362314       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:00:17.362334       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:00:17.362343       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:00:17.362642       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:00:47.359794       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:00:47.363426       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:00:47.363434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 09:00:47.363584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 09:00:48.862638       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:00:48.862747       1 metrics.go:72] Registering metrics
	I1123 09:00:48.862838       1 controller.go:711] "Syncing nftables rules"
	I1123 09:00:57.365004       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:00:57.365046       1 main.go:301] handling current node
	I1123 09:01:07.361433       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:01:07.361478       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e44571e8430b7b63843bce11f9d3695233d4db2d003a5243d4835a53b1578eb7] <==
	I1123 09:00:06.964288       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:00:06.964465       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:00:06.972767       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:00:06.984393       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 09:00:06.984569       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 09:00:06.985925       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:06.987053       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:00:07.382156       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:00:07.406558       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:00:07.406753       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:00:08.845585       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:00:09.081019       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:00:09.332957       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:00:09.377082       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 09:00:09.378583       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:00:09.393243       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:00:09.504263       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:00:10.457185       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:00:10.485950       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:00:10.502730       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:00:14.778660       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:14.785690       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:15.246256       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:00:15.544810       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:01:10.896098       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:58142: use of closed network connection
	
	
	==> kube-controller-manager [bd51fcd97f080424304216ba2d43e32e3983e2704297754815c3137df1a04a3b] <==
	I1123 09:00:14.586170       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:00:14.591567       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:00:14.591919       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:00:14.592096       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:00:14.592207       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:00:14.592728       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:00:14.591685       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:00:14.593056       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:00:14.593257       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-118762"
	I1123 09:00:14.593377       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 09:00:14.591701       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:00:14.596291       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:00:14.606984       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:00:14.607316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:00:14.607448       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:00:14.607572       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:00:14.591507       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:00:14.610289       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:14.610309       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:14.619716       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:14.619927       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:00:14.619941       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:00:14.620014       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:00:14.621424       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:00:59.597721       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bc14f8da099ba3492358f197cf0623d7d6ca4a0ef5346cdd263dd0dfa657c208] <==
	I1123 09:00:17.169783       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:00:17.399059       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:00:17.519171       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:00:17.519207       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:00:17.519288       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:00:17.657921       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:00:17.657987       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:00:17.670728       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:00:17.671081       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:00:17.671103       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:17.672690       1 config.go:200] "Starting service config controller"
	I1123 09:00:17.672715       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:00:17.672734       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:00:17.672738       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:00:17.672748       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:00:17.672752       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:00:17.673620       1 config.go:309] "Starting node config controller"
	I1123 09:00:17.673634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:00:17.673641       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:00:17.773079       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:00:17.773125       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:00:17.773177       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7cf9d65a2dbbc14a2fb50e2921407c3f809339e7f9aac648cde3f0fe0c231ff1] <==
	I1123 09:00:06.085648       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:00:09.385214       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:00:09.385330       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:09.395529       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:00:09.395763       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:00:09.395917       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:09.396014       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:09.396169       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:00:09.396303       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:00:09.399952       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:00:09.400054       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:00:09.497820       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:00:09.497895       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 09:00:09.498023       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:00:11 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:11.711288    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-118762" podStartSLOduration=0.711269115 podStartE2EDuration="711.269115ms" podCreationTimestamp="2025-11-23 09:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:11.678666383 +0000 UTC m=+1.292032469" watchObservedRunningTime="2025-11-23 09:00:11.711269115 +0000 UTC m=+1.324635177"
	Nov 23 09:00:11 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:11.741834    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-118762" podStartSLOduration=0.741814865 podStartE2EDuration="741.814865ms" podCreationTimestamp="2025-11-23 09:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:11.715859072 +0000 UTC m=+1.329225125" watchObservedRunningTime="2025-11-23 09:00:11.741814865 +0000 UTC m=+1.355180927"
	Nov 23 09:00:11 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:11.775224    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-118762" podStartSLOduration=0.775204803 podStartE2EDuration="775.204803ms" podCreationTimestamp="2025-11-23 09:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:11.742224665 +0000 UTC m=+1.355590727" watchObservedRunningTime="2025-11-23 09:00:11.775204803 +0000 UTC m=+1.388570857"
	Nov 23 09:00:14 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:14.599097    1469 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:00:14 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:14.599938    1469 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.758161    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86gv\" (UniqueName: \"kubernetes.io/projected/d4b1b360-1ad9-4d21-bf09-34d8328640f7-kube-api-access-f86gv\") pod \"kube-proxy-fwc9v\" (UID: \"d4b1b360-1ad9-4d21-bf09-34d8328640f7\") " pod="kube-system/kube-proxy-fwc9v"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.758675    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4b1b360-1ad9-4d21-bf09-34d8328640f7-lib-modules\") pod \"kube-proxy-fwc9v\" (UID: \"d4b1b360-1ad9-4d21-bf09-34d8328640f7\") " pod="kube-system/kube-proxy-fwc9v"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.758797    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/110880c9-bd5d-4589-b067-2b1f1168fa0c-cni-cfg\") pod \"kindnet-6vk7l\" (UID: \"110880c9-bd5d-4589-b067-2b1f1168fa0c\") " pod="kube-system/kindnet-6vk7l"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.758894    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/110880c9-bd5d-4589-b067-2b1f1168fa0c-xtables-lock\") pod \"kindnet-6vk7l\" (UID: \"110880c9-bd5d-4589-b067-2b1f1168fa0c\") " pod="kube-system/kindnet-6vk7l"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.758999    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92wmn\" (UniqueName: \"kubernetes.io/projected/110880c9-bd5d-4589-b067-2b1f1168fa0c-kube-api-access-92wmn\") pod \"kindnet-6vk7l\" (UID: \"110880c9-bd5d-4589-b067-2b1f1168fa0c\") " pod="kube-system/kindnet-6vk7l"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.759093    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4b1b360-1ad9-4d21-bf09-34d8328640f7-xtables-lock\") pod \"kube-proxy-fwc9v\" (UID: \"d4b1b360-1ad9-4d21-bf09-34d8328640f7\") " pod="kube-system/kube-proxy-fwc9v"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.759181    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/110880c9-bd5d-4589-b067-2b1f1168fa0c-lib-modules\") pod \"kindnet-6vk7l\" (UID: \"110880c9-bd5d-4589-b067-2b1f1168fa0c\") " pod="kube-system/kindnet-6vk7l"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.759281    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4b1b360-1ad9-4d21-bf09-34d8328640f7-kube-proxy\") pod \"kube-proxy-fwc9v\" (UID: \"d4b1b360-1ad9-4d21-bf09-34d8328640f7\") " pod="kube-system/kube-proxy-fwc9v"
	Nov 23 09:00:16 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:16.032336    1469 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 09:00:18 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:18.101381    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fwc9v" podStartSLOduration=3.101361107 podStartE2EDuration="3.101361107s" podCreationTimestamp="2025-11-23 09:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:18.036753674 +0000 UTC m=+7.650119736" watchObservedRunningTime="2025-11-23 09:00:18.101361107 +0000 UTC m=+7.714727161"
	Nov 23 09:00:20 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:20.804650    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6vk7l" podStartSLOduration=5.804629121 podStartE2EDuration="5.804629121s" podCreationTimestamp="2025-11-23 09:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:18.168279023 +0000 UTC m=+7.781645093" watchObservedRunningTime="2025-11-23 09:00:20.804629121 +0000 UTC m=+10.417995183"
	Nov 23 09:00:57 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:57.371436    1469 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:00:57 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:57.426359    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cacf6afe-5fee-4f94-8eb9-c7c24526cf27-config-volume\") pod \"coredns-66bc5c9577-r5snd\" (UID: \"cacf6afe-5fee-4f94-8eb9-c7c24526cf27\") " pod="kube-system/coredns-66bc5c9577-r5snd"
	Nov 23 09:00:57 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:57.426437    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5fl\" (UniqueName: \"kubernetes.io/projected/cacf6afe-5fee-4f94-8eb9-c7c24526cf27-kube-api-access-rs5fl\") pod \"coredns-66bc5c9577-r5snd\" (UID: \"cacf6afe-5fee-4f94-8eb9-c7c24526cf27\") " pod="kube-system/coredns-66bc5c9577-r5snd"
	Nov 23 09:00:57 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:57.527252    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxcm\" (UniqueName: \"kubernetes.io/projected/d0fab715-c08e-4a99-a6ba-4b4837f47aaf-kube-api-access-sjxcm\") pod \"storage-provisioner\" (UID: \"d0fab715-c08e-4a99-a6ba-4b4837f47aaf\") " pod="kube-system/storage-provisioner"
	Nov 23 09:00:57 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:57.527318    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d0fab715-c08e-4a99-a6ba-4b4837f47aaf-tmp\") pod \"storage-provisioner\" (UID: \"d0fab715-c08e-4a99-a6ba-4b4837f47aaf\") " pod="kube-system/storage-provisioner"
	Nov 23 09:00:58 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:58.174501    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r5snd" podStartSLOduration=43.174480426 podStartE2EDuration="43.174480426s" podCreationTimestamp="2025-11-23 09:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:58.151526592 +0000 UTC m=+47.764892662" watchObservedRunningTime="2025-11-23 09:00:58.174480426 +0000 UTC m=+47.787846480"
	Nov 23 09:00:58 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:58.194280    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.194260207 podStartE2EDuration="40.194260207s" podCreationTimestamp="2025-11-23 09:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:58.175382341 +0000 UTC m=+47.788748493" watchObservedRunningTime="2025-11-23 09:00:58.194260207 +0000 UTC m=+47.807626261"
	Nov 23 09:01:00 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:01:00.785585    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmdg\" (UniqueName: \"kubernetes.io/projected/5c2314ab-27c6-4441-889f-af501dd53560-kube-api-access-wtmdg\") pod \"busybox\" (UID: \"5c2314ab-27c6-4441-889f-af501dd53560\") " pod="default/busybox"
	Nov 23 09:01:04 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:01:04.239782    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.995372546 podStartE2EDuration="4.23976421s" podCreationTimestamp="2025-11-23 09:01:00 +0000 UTC" firstStartedPulling="2025-11-23 09:01:01.143978392 +0000 UTC m=+50.757344446" lastFinishedPulling="2025-11-23 09:01:03.388370056 +0000 UTC m=+53.001736110" observedRunningTime="2025-11-23 09:01:04.239259656 +0000 UTC m=+53.852625710" watchObservedRunningTime="2025-11-23 09:01:04.23976421 +0000 UTC m=+53.853130263"
	
	
	==> storage-provisioner [0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd] <==
	I1123 09:00:58.081648       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:00:58.098857       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:00:58.098919       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:00:58.101999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:58.111039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:00:58.111418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:00:58.111953       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa67e961-0088-43e8-a322-4cd46a51ea66", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-118762_a5d34672-e260-47df-a56c-b960d50ac6cd became leader
	I1123 09:00:58.112166       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-118762_a5d34672-e260-47df-a56c-b960d50ac6cd!
	W1123 09:00:58.121147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:58.128516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:00:58.214076       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-118762_a5d34672-e260-47df-a56c-b960d50ac6cd!
	W1123 09:01:00.281323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:00.357953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:02.361254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:02.369208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:04.372260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:04.376993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:06.380476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:06.390697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:08.394222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:08.399004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:10.402776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:10.415195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:12.418896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:12.424565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-118762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-118762
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-118762:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c",
	        "Created": "2025-11-23T08:59:39.122301538Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 215560,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:59:39.190221667Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c/hostname",
	        "HostsPath": "/var/lib/docker/containers/9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c/hosts",
	        "LogPath": "/var/lib/docker/containers/9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c/9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c-json.log",
	        "Name": "/default-k8s-diff-port-118762",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-118762:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-118762",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9b8dfb0e18006d27769ec29c639b765f48b4f1790ba4717d05f92b3bd1e28d8c",
	                "LowerDir": "/var/lib/docker/overlay2/f999f3409882cb4ddc869e7d40ae0cbb7d25319a3657e618b3d903ead519ef2d-init/diff:/var/lib/docker/overlay2/e1de88c117c0c773e1fa636243190fd97eadaa5a8e1ee08fd53827cbac767d35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f999f3409882cb4ddc869e7d40ae0cbb7d25319a3657e618b3d903ead519ef2d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f999f3409882cb4ddc869e7d40ae0cbb7d25319a3657e618b3d903ead519ef2d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f999f3409882cb4ddc869e7d40ae0cbb7d25319a3657e618b3d903ead519ef2d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-118762",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-118762/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-118762",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-118762",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-118762",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "384d6bbad0e8ca5751052a1b67261e1cd19d59c71672f2d31cbbeca0bdf614f9",
	            "SandboxKey": "/var/run/docker/netns/384d6bbad0e8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-118762": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:91:76:b6:ac:2a",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2b6d77ec61e96b127fbc34ebc64c03e7e497d95e117654f3d1a0ea3bd4bc6193",
	                    "EndpointID": "194f8cb7e543614697d6074a54a3b0fd34fcc4ff0587d794942dd4133f848483",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-118762",
	                        "9b8dfb0e1800"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-118762 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-118762 logs -n 25: (1.212520298s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ ssh     │ -p cilium-694698 sudo crio config                                                                                                                                                                                                                   │ cilium-694698                │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │                     │
	│ delete  │ -p cilium-694698                                                                                                                                                                                                                                    │ cilium-694698                │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p force-systemd-env-023309 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p kubernetes-upgrade-291582                                                                                                                                                                                                                        │ kubernetes-upgrade-291582    │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p cert-expiration-918102 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ force-systemd-env-023309 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p force-systemd-env-023309                                                                                                                                                                                                                         │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-options-886452 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ cert-options-886452 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ -p cert-options-886452 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-options-886452                                                                                                                                                                                                                              │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-132097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ stop    │ -p old-k8s-version-132097 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-132097 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ image   │ old-k8s-version-132097 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p old-k8s-version-132097 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ unpause │ -p old-k8s-version-132097 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p old-k8s-version-132097                                                                                                                                                                                                                           │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p cert-expiration-918102 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p old-k8s-version-132097                                                                                                                                                                                                                           │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p default-k8s-diff-port-118762 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p cert-expiration-918102                                                                                                                                                                                                                           │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p embed-certs-672503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:01 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:59:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:59:40.577485  216074 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:59:40.577691  216074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:40.577718  216074 out.go:374] Setting ErrFile to fd 2...
	I1123 08:59:40.577739  216074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:40.578089  216074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:59:40.578573  216074 out.go:368] Setting JSON to false
	I1123 08:59:40.579525  216074 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6133,"bootTime":1763882248,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:59:40.579625  216074 start.go:143] virtualization:  
	I1123 08:59:40.583259  216074 out.go:179] * [embed-certs-672503] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:59:40.587830  216074 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:59:40.587967  216074 notify.go:221] Checking for updates...
	I1123 08:59:40.594558  216074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:59:40.597788  216074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:59:40.601027  216074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:59:40.604233  216074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:59:40.607539  216074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:59:40.611140  216074 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:40.611247  216074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:59:40.656282  216074 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:59:40.656413  216074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:40.752458  216074 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:59:40.738300735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:40.752566  216074 docker.go:319] overlay module found
	I1123 08:59:40.756622  216074 out.go:179] * Using the docker driver based on user configuration
	I1123 08:59:40.759788  216074 start.go:309] selected driver: docker
	I1123 08:59:40.759810  216074 start.go:927] validating driver "docker" against <nil>
	I1123 08:59:40.759823  216074 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:59:40.760559  216074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:40.840879  216074 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-23 08:59:40.831791559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:40.841036  216074 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:59:40.841265  216074 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:59:40.844487  216074 out.go:179] * Using Docker driver with root privileges
	I1123 08:59:40.847551  216074 cni.go:84] Creating CNI manager for ""
	I1123 08:59:40.847624  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:40.847640  216074 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:59:40.847726  216074 start.go:353] cluster config:
	{Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:40.850947  216074 out.go:179] * Starting "embed-certs-672503" primary control-plane node in "embed-certs-672503" cluster
	I1123 08:59:40.853960  216074 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:59:40.856924  216074 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:59:40.859875  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:40.859924  216074 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 08:59:40.859933  216074 cache.go:65] Caching tarball of preloaded images
	I1123 08:59:40.859968  216074 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:59:40.860013  216074 preload.go:238] Found /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:59:40.860024  216074 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:59:40.860143  216074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json ...
	I1123 08:59:40.860163  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json: {Name:mkb81d39d58a71dac5e98d24c241cff9b78e273e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:40.879736  216074 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:59:40.879759  216074 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:59:40.879779  216074 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:59:40.879808  216074 start.go:360] acquireMachinesLock for embed-certs-672503: {Name:mk52b3d46d7a43264b4677c9fc6abfc0706853fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:59:40.879915  216074 start.go:364] duration metric: took 86.869µs to acquireMachinesLock for "embed-certs-672503"
	I1123 08:59:40.879944  216074 start.go:93] Provisioning new machine with config: &{Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:59:40.880019  216074 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:59:39.039954  214550 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-118762:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.007752645s)
	I1123 08:59:39.039991  214550 kic.go:203] duration metric: took 5.007913738s to extract preloaded images to volume ...
	W1123 08:59:39.040149  214550 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:59:39.040271  214550 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:59:39.103132  214550 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-118762 --name default-k8s-diff-port-118762 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-118762 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-118762 --network default-k8s-diff-port-118762 --ip 192.168.85.2 --volume default-k8s-diff-port-118762:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:59:39.606571  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Running}}
	I1123 08:59:39.652908  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:39.675600  214550 cli_runner.go:164] Run: docker exec default-k8s-diff-port-118762 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:59:39.805153  214550 oci.go:144] the created container "default-k8s-diff-port-118762" has a running status.
	I1123 08:59:39.805181  214550 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa...
	I1123 08:59:40.603002  214550 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:59:40.646836  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:40.670926  214550 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:59:40.670945  214550 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-118762 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:59:40.744487  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:40.770445  214550 machine.go:94] provisionDockerMachine start ...
	I1123 08:59:40.770539  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:40.791316  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:40.791758  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:40.791772  214550 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:59:40.792437  214550 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51880->127.0.0.1:33064: read: connection reset by peer
	I1123 08:59:40.883578  216074 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:59:40.883819  216074 start.go:159] libmachine.API.Create for "embed-certs-672503" (driver="docker")
	I1123 08:59:40.883864  216074 client.go:173] LocalClient.Create starting
	I1123 08:59:40.883946  216074 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem
	I1123 08:59:40.883982  216074 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:40.884002  216074 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:40.884067  216074 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem
	I1123 08:59:40.884090  216074 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:40.884109  216074 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:40.884452  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:59:40.900264  216074 cli_runner.go:211] docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:59:40.900362  216074 network_create.go:284] running [docker network inspect embed-certs-672503] to gather additional debugging logs...
	I1123 08:59:40.900388  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503
	W1123 08:59:40.916918  216074 cli_runner.go:211] docker network inspect embed-certs-672503 returned with exit code 1
	I1123 08:59:40.916950  216074 network_create.go:287] error running [docker network inspect embed-certs-672503]: docker network inspect embed-certs-672503: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-672503 not found
	I1123 08:59:40.916965  216074 network_create.go:289] output of [docker network inspect embed-certs-672503]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-672503 not found
	
	** /stderr **
	I1123 08:59:40.917065  216074 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:40.933652  216074 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ab12b2c3b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:c9:6d:7b:80:76} reservation:<nil>}
	I1123 08:59:40.933989  216074 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7f5e4a52a57c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:1a:79:b2:02:66} reservation:<nil>}
	I1123 08:59:40.934307  216074 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed031858d624 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:47:7d:04:56:4a} reservation:<nil>}
	I1123 08:59:40.934717  216074 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7270}
	I1123 08:59:40.934741  216074 network_create.go:124] attempt to create docker network embed-certs-672503 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:59:40.934796  216074 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-672503 embed-certs-672503
	I1123 08:59:40.992310  216074 network_create.go:108] docker network embed-certs-672503 192.168.76.0/24 created
	I1123 08:59:40.992345  216074 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-672503" container
	I1123 08:59:40.992424  216074 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:59:41.010086  216074 cli_runner.go:164] Run: docker volume create embed-certs-672503 --label name.minikube.sigs.k8s.io=embed-certs-672503 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:59:41.028903  216074 oci.go:103] Successfully created a docker volume embed-certs-672503
	I1123 08:59:41.029006  216074 cli_runner.go:164] Run: docker run --rm --name embed-certs-672503-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-672503 --entrypoint /usr/bin/test -v embed-certs-672503:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:59:41.597394  216074 oci.go:107] Successfully prepared a docker volume embed-certs-672503
	I1123 08:59:41.597456  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:41.597467  216074 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:59:41.597532  216074 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-672503:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:59:43.963549  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-118762
	
	I1123 08:59:43.963629  214550 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-118762"
	I1123 08:59:43.963730  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:43.982067  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:43.982376  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:43.982388  214550 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-118762 && echo "default-k8s-diff-port-118762" | sudo tee /etc/hostname
	I1123 08:59:44.162438  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-118762
	
	I1123 08:59:44.162524  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.184402  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:44.184717  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:44.184743  214550 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-118762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-118762/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-118762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:59:44.387688  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:59:44.387725  214550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-2811/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-2811/.minikube}
	I1123 08:59:44.387751  214550 ubuntu.go:190] setting up certificates
	I1123 08:59:44.387761  214550 provision.go:84] configureAuth start
	I1123 08:59:44.387823  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.406977  214550 provision.go:143] copyHostCerts
	I1123 08:59:44.407043  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem, removing ...
	I1123 08:59:44.407056  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem
	I1123 08:59:44.407135  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem (1082 bytes)
	I1123 08:59:44.407247  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem, removing ...
	I1123 08:59:44.407259  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem
	I1123 08:59:44.407287  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem (1123 bytes)
	I1123 08:59:44.407420  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem, removing ...
	I1123 08:59:44.407449  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem
	I1123 08:59:44.407501  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem (1679 bytes)
	I1123 08:59:44.407571  214550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-118762 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-118762 localhost minikube]
	I1123 08:59:44.485276  214550 provision.go:177] copyRemoteCerts
	I1123 08:59:44.485399  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:59:44.485475  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.502836  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.611676  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 08:59:44.631601  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:59:44.649182  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:59:44.666321  214550 provision.go:87] duration metric: took 278.533612ms to configureAuth
	I1123 08:59:44.666344  214550 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:59:44.666518  214550 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:44.666526  214550 machine.go:97] duration metric: took 3.896062717s to provisionDockerMachine
	I1123 08:59:44.666532  214550 client.go:176] duration metric: took 11.505696925s to LocalClient.Create
	I1123 08:59:44.666546  214550 start.go:167] duration metric: took 11.505763117s to libmachine.API.Create "default-k8s-diff-port-118762"
	I1123 08:59:44.666552  214550 start.go:293] postStartSetup for "default-k8s-diff-port-118762" (driver="docker")
	I1123 08:59:44.666561  214550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:59:44.666612  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:59:44.666651  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.683801  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.791506  214550 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:59:44.795326  214550 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:59:44.795375  214550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:59:44.795403  214550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/addons for local assets ...
	I1123 08:59:44.795479  214550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/files for local assets ...
	I1123 08:59:44.795605  214550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem -> 46242.pem in /etc/ssl/certs
	I1123 08:59:44.795716  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:59:44.804406  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:44.824224  214550 start.go:296] duration metric: took 157.657779ms for postStartSetup
	I1123 08:59:44.824627  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.842791  214550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/config.json ...
	I1123 08:59:44.845272  214550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:59:44.845334  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.870817  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.973574  214550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:59:44.978835  214550 start.go:128] duration metric: took 11.821803269s to createHost
	I1123 08:59:44.978859  214550 start.go:83] releasing machines lock for "default-k8s-diff-port-118762", held for 11.821970245s
	I1123 08:59:44.978934  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.996375  214550 ssh_runner.go:195] Run: cat /version.json
	I1123 08:59:44.996410  214550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:59:44.996429  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.997293  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:45.019323  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:45.019748  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:45.266005  214550 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:45.276798  214550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:59:45.286312  214550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:59:45.286509  214550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:59:45.400996  214550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:59:45.401066  214550 start.go:496] detecting cgroup driver to use...
	I1123 08:59:45.401106  214550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:59:45.401166  214550 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:59:45.416740  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:59:45.430174  214550 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:59:45.430277  214550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:59:45.449266  214550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:59:45.468575  214550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:59:45.593366  214550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:59:45.727407  214550 docker.go:234] disabling docker service ...
	I1123 08:59:45.727524  214550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:59:45.750566  214550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:59:45.763685  214550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:59:45.882473  214550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:59:46.015128  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:59:46.029863  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:59:46.051000  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:59:46.067292  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:59:46.081288  214550 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:59:46.081404  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:59:46.100139  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:46.120619  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:59:46.133469  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:46.142574  214550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:59:46.152921  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:59:46.164064  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:59:46.173191  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:59:46.188341  214550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:59:46.201637  214550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:59:46.214012  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:46.386854  214550 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:59:46.574017  214550 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:59:46.574082  214550 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:59:46.590863  214550 start.go:564] Will wait 60s for crictl version
	I1123 08:59:46.590924  214550 ssh_runner.go:195] Run: which crictl
	I1123 08:59:46.596219  214550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:59:46.641889  214550 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:59:46.641953  214550 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:46.715861  214550 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:46.799546  214550 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:59:46.802513  214550 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-118762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:46.830038  214550 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:59:46.834203  214550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:46.850678  214550 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-118762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:59:46.850809  214550 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:46.850885  214550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:46.899220  214550 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:46.899242  214550 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:59:46.899304  214550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:46.940637  214550 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:46.940658  214550 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:59:46.940666  214550 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1123 08:59:46.940760  214550 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-118762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:59:46.941123  214550 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:59:47.001942  214550 cni.go:84] Creating CNI manager for ""
	I1123 08:59:47.001962  214550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:47.001977  214550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:59:47.002000  214550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-118762 NodeName:default-k8s-diff-port-118762 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:59:47.002115  214550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-118762"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:59:47.002179  214550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:59:47.020644  214550 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:59:47.020704  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:47.037002  214550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1123 08:59:47.055802  214550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:47.079429  214550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
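
The multi-document kubeadm/kubelet/kube-proxy configuration printed above is what was just copied to the node as /var/tmp/minikube/kubeadm.yaml.new; its bindPort of 8444 (rather than minikube's usual 8443) is the point of the default-k8s-diff-port profile. A minimal Go sketch that would confirm the port from the rendered file (reading the file directly and decoding it with gopkg.in/yaml.v3 are assumptions made for illustration, not something the harness does):

package main

import (
	"fmt"
	"io"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path taken from the log above: the rendered kubeadm config on the node.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// The file is a multi-document YAML stream, so decode documents in a loop.
	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			log.Fatal(err)
		}
		// Only the InitConfiguration document carries localAPIEndpoint.
		if doc["kind"] == "InitConfiguration" {
			ep, ok := doc["localAPIEndpoint"].(map[string]interface{})
			if !ok {
				log.Fatal("unexpected localAPIEndpoint shape")
			}
			fmt.Println("API server bindPort:", ep["bindPort"]) // expect 8444 for this profile
		}
	}
}
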
	I1123 08:59:47.092521  214550 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:47.096917  214550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:47.106392  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:47.305463  214550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:47.337722  214550 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762 for IP: 192.168.85.2
	I1123 08:59:47.337739  214550 certs.go:195] generating shared ca certs ...
	I1123 08:59:47.337754  214550 certs.go:227] acquiring lock for ca certs: {Name:mk62ed57b444cc29d692b7c3030f7d32bd07c4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.337885  214550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key
	I1123 08:59:47.337928  214550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key
	I1123 08:59:47.337936  214550 certs.go:257] generating profile certs ...
	I1123 08:59:47.337988  214550 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key
	I1123 08:59:47.337997  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt with IP's: []
	I1123 08:59:47.952908  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt ...
	I1123 08:59:47.952991  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt: {Name:mkf95cd7f0813a939fc5a10b868018298b21adb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.953216  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key ...
	I1123 08:59:47.953254  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key: {Name:mkf9a2acc2c42bd0a0cf1a1f2787b6cd46ba4f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.953415  214550 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca
	I1123 08:59:47.953453  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:59:48.203697  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca ...
	I1123 08:59:48.203769  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca: {Name:mk05909547f3239afc9409b846b3fb486118a441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.203987  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca ...
	I1123 08:59:48.204023  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca: {Name:mkec035b62be2e775b2f0c85ff409f77aebf0a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.204156  214550 certs.go:382] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt
	I1123 08:59:48.204271  214550 certs.go:386] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key
	I1123 08:59:48.204380  214550 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key
	I1123 08:59:48.204418  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt with IP's: []
	I1123 08:59:48.359177  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt ...
	I1123 08:59:48.359211  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt: {Name:mkf91279fb6f4fe072e258fdea87868d2840f420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.359412  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key ...
	I1123 08:59:48.359429  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key: {Name:mkbf74023435808035706f9a2ad6638168a8a889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.359663  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem (1338 bytes)
	W1123 08:59:48.359708  214550 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:48.359723  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:48.359753  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:59:48.359783  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:48.359810  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem (1679 bytes)
	I1123 08:59:48.359858  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:48.360416  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:48.379912  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:59:48.398946  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:48.417150  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:59:48.434559  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 08:59:48.452066  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:59:48.470350  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:48.488326  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:59:48.506336  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem --> /usr/share/ca-certificates/4624.pem (1338 bytes)
	I1123 08:59:48.524422  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /usr/share/ca-certificates/46242.pem (1708 bytes)
	I1123 08:59:48.541642  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:48.559509  214550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:48.572933  214550 ssh_runner.go:195] Run: openssl version
	I1123 08:59:48.579412  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46242.pem && ln -fs /usr/share/ca-certificates/46242.pem /etc/ssl/certs/46242.pem"
	I1123 08:59:48.588035  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.591879  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:18 /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.591946  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.633205  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46242.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:48.641796  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:48.650209  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.654132  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.654249  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.695982  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:59:48.704319  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4624.pem && ln -fs /usr/share/ca-certificates/4624.pem /etc/ssl/certs/4624.pem"
	I1123 08:59:48.712849  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.716712  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:18 /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.716781  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.757938  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4624.pem /etc/ssl/certs/51391683.0"
	I1123 08:59:48.766377  214550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:48.769975  214550 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:48.770030  214550 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-118762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:48.770114  214550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:48.770174  214550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:48.795754  214550 cri.go:89] found id: ""
	I1123 08:59:48.795881  214550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:48.803757  214550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:48.811647  214550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:48.811743  214550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:48.819712  214550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:48.819733  214550 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:48.819805  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 08:59:48.827458  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:48.827560  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:48.835278  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 08:59:48.843241  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:48.843395  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:48.850790  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 08:59:48.859021  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:48.859145  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:48.866723  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 08:59:48.874202  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:48.874315  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:59:48.882081  214550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:48.932250  214550 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:48.932626  214550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:48.968464  214550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:48.968571  214550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:48.968634  214550 kubeadm.go:319] OS: Linux
	I1123 08:59:48.968710  214550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:48.968779  214550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:48.968852  214550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:48.968949  214550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:48.969029  214550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:48.969104  214550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:48.969191  214550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:48.969263  214550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:48.969334  214550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:49.039395  214550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:49.039547  214550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:49.039694  214550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:49.045139  214550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:46.061340  216074 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-672503:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.463759827s)
	I1123 08:59:46.061369  216074 kic.go:203] duration metric: took 4.463899193s to extract preloaded images to volume ...
	W1123 08:59:46.061515  216074 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:59:46.061700  216074 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:59:46.159063  216074 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-672503 --name embed-certs-672503 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-672503 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-672503 --network embed-certs-672503 --ip 192.168.76.2 --volume embed-certs-672503:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:59:46.530738  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Running}}
	I1123 08:59:46.558782  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:46.582800  216074 cli_runner.go:164] Run: docker exec embed-certs-672503 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:59:46.646806  216074 oci.go:144] the created container "embed-certs-672503" has a running status.
	I1123 08:59:46.646847  216074 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa...
	I1123 08:59:46.847783  216074 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:59:46.880288  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:46.917106  216074 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:59:46.917131  216074 kic_runner.go:114] Args: [docker exec --privileged embed-certs-672503 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:59:46.987070  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:47.019780  216074 machine.go:94] provisionDockerMachine start ...
	I1123 08:59:47.019874  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:47.051570  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:47.051918  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:47.051935  216074 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:59:47.052575  216074 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:59:50.211545  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-672503
	
	I1123 08:59:50.211595  216074 ubuntu.go:182] provisioning hostname "embed-certs-672503"
	I1123 08:59:50.211673  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:50.237002  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:50.237319  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:50.237337  216074 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-672503 && echo "embed-certs-672503" | sudo tee /etc/hostname
	I1123 08:59:50.436539  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-672503
	
	I1123 08:59:50.436687  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:50.465709  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:50.466029  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:50.466045  216074 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672503' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672503/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672503' | sudo tee -a /etc/hosts; 
				fi
			fi
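
The shell fragment above is the same idempotent /etc/hosts edit that ran earlier for default-k8s-diff-port-118762, now pinning 127.0.1.1 to embed-certs-672503: rewrite the existing 127.0.1.1 entry if there is one, otherwise append a new line. A sketch of how that script can be parameterised by hostname (the helper below is hypothetical and for illustration only; minikube's provisioner builds the script internally):

package main

import "fmt"

// hostsScript reproduces the idempotent /etc/hosts edit shown in the log:
// rewrite the 127.0.1.1 entry when one exists, otherwise append one.
func hostsScript(hostname string) string {
	return fmt.Sprintf(`
	if ! grep -xq '.*\s%[1]s' /etc/hosts; then
		if grep -xq '127.0.1.1\s.*' /etc/hosts; then
			sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
		else
			echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
		fi
	fi`, hostname)
}

func main() {
	fmt.Println(hostsScript("embed-certs-672503"))
}
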
	I1123 08:59:49.051452  214550 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:49.051585  214550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:49.051703  214550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:50.049674  214550 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:50.094855  214550 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:50.781521  214550 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:51.007002  214550 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:51.586516  214550 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:51.587407  214550 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-118762 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:52.294730  214550 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:52.295126  214550 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-118762 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:50.619868  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:59:50.619905  216074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-2811/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-2811/.minikube}
	I1123 08:59:50.619926  216074 ubuntu.go:190] setting up certificates
	I1123 08:59:50.619937  216074 provision.go:84] configureAuth start
	I1123 08:59:50.620004  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:50.645393  216074 provision.go:143] copyHostCerts
	I1123 08:59:50.645466  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem, removing ...
	I1123 08:59:50.645475  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem
	I1123 08:59:50.645553  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem (1082 bytes)
	I1123 08:59:50.645639  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem, removing ...
	I1123 08:59:50.645644  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem
	I1123 08:59:50.645669  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem (1123 bytes)
	I1123 08:59:50.645724  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem, removing ...
	I1123 08:59:50.645729  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem
	I1123 08:59:50.645751  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem (1679 bytes)
	I1123 08:59:50.645795  216074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672503 san=[127.0.0.1 192.168.76.2 embed-certs-672503 localhost minikube]
	I1123 08:59:51.127888  216074 provision.go:177] copyRemoteCerts
	I1123 08:59:51.127960  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:59:51.128004  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.153368  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.284623  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:59:51.314621  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:59:51.335720  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:59:51.355451  216074 provision.go:87] duration metric: took 735.481705ms to configureAuth
	I1123 08:59:51.355533  216074 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:59:51.355763  216074 config.go:182] Loaded profile config "embed-certs-672503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:51.355791  216074 machine.go:97] duration metric: took 4.335986452s to provisionDockerMachine
	I1123 08:59:51.355815  216074 client.go:176] duration metric: took 10.471938723s to LocalClient.Create
	I1123 08:59:51.355856  216074 start.go:167] duration metric: took 10.472037333s to libmachine.API.Create "embed-certs-672503"
	I1123 08:59:51.355949  216074 start.go:293] postStartSetup for "embed-certs-672503" (driver="docker")
	I1123 08:59:51.355976  216074 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:59:51.356061  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:59:51.356134  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.375632  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.492356  216074 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:59:51.496551  216074 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:59:51.496580  216074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:59:51.496592  216074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/addons for local assets ...
	I1123 08:59:51.496645  216074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/files for local assets ...
	I1123 08:59:51.496721  216074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem -> 46242.pem in /etc/ssl/certs
	I1123 08:59:51.496826  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:59:51.505195  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:51.525735  216074 start.go:296] duration metric: took 169.754775ms for postStartSetup
	I1123 08:59:51.526206  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:51.546243  216074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json ...
	I1123 08:59:51.546511  216074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:59:51.546553  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.568894  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.680931  216074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:59:51.686143  216074 start.go:128] duration metric: took 10.806110424s to createHost
	I1123 08:59:51.686171  216074 start.go:83] releasing machines lock for "embed-certs-672503", held for 10.806242996s
	I1123 08:59:51.686257  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:51.705486  216074 ssh_runner.go:195] Run: cat /version.json
	I1123 08:59:51.705573  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.705949  216074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:59:51.706024  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.760593  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.767588  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.883448  216074 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:51.991493  216074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:59:51.996626  216074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:59:51.996703  216074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:59:52.044663  216074 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:59:52.044689  216074 start.go:496] detecting cgroup driver to use...
	I1123 08:59:52.044721  216074 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:59:52.044781  216074 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:59:52.061494  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:59:52.076189  216074 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:59:52.076260  216074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:59:52.094291  216074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:59:52.114994  216074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:59:52.292895  216074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:59:52.481817  216074 docker.go:234] disabling docker service ...
	I1123 08:59:52.481931  216074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:59:52.508317  216074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:59:52.526364  216074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:59:52.700213  216074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:59:52.897094  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:59:52.915331  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:59:52.931211  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:59:52.946225  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:59:52.956101  216074 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:59:52.956226  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:59:52.965762  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:52.975341  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:59:52.985192  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:52.994955  216074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:59:53.010410  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:59:53.027207  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:59:53.042077  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:59:53.054424  216074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:59:53.063874  216074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:59:53.072557  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:53.226737  216074 ssh_runner.go:195] Run: sudo systemctl restart containerd
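Editor's note (not part of the log): the sed/sysctl sequence above rewrites /etc/containerd/config.toml to pin the sandbox image to registry.k8s.io/pause:3.10.1, force SystemdCgroup = false to match the "cgroupfs" driver detected on the host, map the legacy io.containerd.runtime.v1.linux / runc.v1 runtime references to io.containerd.runc.v2, point the CNI conf_dir at /etc/cni/net.d, and re-enable unprivileged ports, after which containerd is restarted just above. A minimal, hypothetical spot-check of the result (not something the test runs):
	# verify the rewritten containerd config and that the daemon came back up
	grep -nE 'SystemdCgroup|sandbox_image|enable_unprivileged_ports|conf_dir' /etc/containerd/config.toml
	systemctl is-active containerd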
	I1123 08:59:53.443692  216074 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:59:53.443892  216074 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:59:53.448833  216074 start.go:564] Will wait 60s for crictl version
	I1123 08:59:53.448947  216074 ssh_runner.go:195] Run: which crictl
	I1123 08:59:53.453157  216074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:59:53.486128  216074 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:59:53.486258  216074 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:53.513131  216074 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:53.540090  216074 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:59:53.543140  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:53.564398  216074 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:59:53.569921  216074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
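Editor's note (not part of the log): the command above updates /etc/hosts idempotently: it filters out any existing host.minikube.internal entry, appends the fresh 192.168.76.1 mapping, and copies the temp file back into place with sudo. The same pattern is reused further down for control-plane.minikube.internal. A generalized sketch, where update_hosts_entry is a hypothetical helper rather than anything minikube ships:
	update_hosts_entry() {
	  local ip="$1" name="$2"
	  # drop any existing line for this name, then append the new mapping
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts
	}
	update_hosts_entry 192.168.76.1 host.minikube.internal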
	I1123 08:59:53.584791  216074 kubeadm.go:884] updating cluster {Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:59:53.584953  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:53.585060  216074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:53.625666  216074 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:53.625695  216074 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:59:53.625759  216074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:53.653757  216074 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:53.653781  216074 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:59:53.653789  216074 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 08:59:53.653881  216074 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-672503 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
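Editor's note (not part of the log): the empty ExecStart= line in the kubelet unit fragment above is deliberate; a systemd drop-in must clear the inherited ExecStart before it can define a new one for a regular (non-oneshot) service. The fragment is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by the scp step below and picked up by the later daemon-reload. A hypothetical way to inspect the merged result on the node:
	# show the base unit plus drop-ins, and the effective ExecStart
	systemctl cat kubelet
	systemctl show kubelet -p ExecStart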
	I1123 08:59:53.653948  216074 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:59:53.696072  216074 cni.go:84] Creating CNI manager for ""
	I1123 08:59:53.696098  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:53.696113  216074 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:59:53.696140  216074 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672503 NodeName:embed-certs-672503 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:59:53.696260  216074 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-672503"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:59:53.696337  216074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:59:53.705716  216074 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:59:53.705795  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:53.718287  216074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1123 08:59:53.737046  216074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:53.760149  216074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1123 08:59:53.778487  216074 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:53.782565  216074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:53.792649  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:53.947067  216074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:53.969434  216074 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503 for IP: 192.168.76.2
	I1123 08:59:53.969452  216074 certs.go:195] generating shared ca certs ...
	I1123 08:59:53.969468  216074 certs.go:227] acquiring lock for ca certs: {Name:mk62ed57b444cc29d692b7c3030f7d32bd07c4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:53.969604  216074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key
	I1123 08:59:53.969644  216074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key
	I1123 08:59:53.969650  216074 certs.go:257] generating profile certs ...
	I1123 08:59:53.969704  216074 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key
	I1123 08:59:53.969718  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt with IP's: []
	I1123 08:59:54.209900  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt ...
	I1123 08:59:54.209965  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt: {Name:mk5c525ca71ddd2fe2c7f6b3ca8599f23905a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.210184  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key ...
	I1123 08:59:54.210197  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key: {Name:mk8943be44317db4dff6c1e7eaf6a19a57aa6c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.210284  216074 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae
	I1123 08:59:54.210296  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:59:54.801069  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae ...
	I1123 08:59:54.801096  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae: {Name:mk380799870e5ea7b7c67a4d865af58b1de5aef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.801278  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae ...
	I1123 08:59:54.801290  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae: {Name:mk102df1c6315a508518783bccf3cb2f81c38779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.801364  216074 certs.go:382] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt
	I1123 08:59:54.801439  216074 certs.go:386] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key
	I1123 08:59:54.801491  216074 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key
	I1123 08:59:54.801507  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt with IP's: []
	I1123 08:59:55.253694  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt ...
	I1123 08:59:55.253767  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt: {Name:mkdf06b6c921783e84858386a11a6aa335d63967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:55.253999  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key ...
	I1123 08:59:55.254013  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key: {Name:mk979f2bcf5527fe8ab1fb441ce8c10881831a69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:55.254199  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem (1338 bytes)
	W1123 08:59:55.254240  216074 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:55.254249  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:55.254277  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:59:55.254303  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:55.254368  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem (1679 bytes)
	I1123 08:59:55.254413  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:55.255001  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:55.275757  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:59:55.301850  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:55.327043  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:59:55.356120  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:59:55.379337  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:59:55.403251  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:55.432903  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:59:55.452955  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:55.477346  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem --> /usr/share/ca-certificates/4624.pem (1338 bytes)
	I1123 08:59:55.510351  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /usr/share/ca-certificates/46242.pem (1708 bytes)
	I1123 08:59:55.531366  216074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:55.546185  216074 ssh_runner.go:195] Run: openssl version
	I1123 08:59:55.552895  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4624.pem && ln -fs /usr/share/ca-certificates/4624.pem /etc/ssl/certs/4624.pem"
	I1123 08:59:55.562322  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.566546  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:18 /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.566661  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.608819  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4624.pem /etc/ssl/certs/51391683.0"
	I1123 08:59:55.617792  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46242.pem && ln -fs /usr/share/ca-certificates/46242.pem /etc/ssl/certs/46242.pem"
	I1123 08:59:55.626621  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.631031  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:18 /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.631147  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.673213  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46242.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:55.682467  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:55.691629  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.696005  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.696116  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.737391  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
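Editor's note (not part of the log): the symlink targets created above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject-hash names; OpenSSL resolves trusted CAs by looking up /etc/ssl/certs/<subject-hash>.0, which is why each PEM staged under /usr/share/ca-certificates gets a hash-named link. An illustrative reconstruction of one such link, using the minikubeCA.pem path from the log:
	# derive the subject hash, then create the hash-named symlink OpenSSL expects
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"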
	I1123 08:59:55.746485  216074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:55.750669  216074 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:55.750779  216074 kubeadm.go:401] StartCluster: {Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:55.750882  216074 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:55.750971  216074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:55.781886  216074 cri.go:89] found id: ""
	I1123 08:59:55.782008  216074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:55.792128  216074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:55.801015  216074 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:55.801120  216074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:55.811498  216074 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:55.811567  216074 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:55.811651  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:59:55.820390  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:55.820489  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:55.828204  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:59:55.837261  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:55.837355  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:55.845286  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:59:55.854064  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:55.854174  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:55.861833  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:59:55.870496  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:55.870610  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:59:55.878638  216074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:55.935971  216074 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:55.937587  216074 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:56.004559  216074 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:56.004761  216074 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:56.004834  216074 kubeadm.go:319] OS: Linux
	I1123 08:59:56.004912  216074 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:56.004998  216074 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:56.005083  216074 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:56.005163  216074 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:56.005244  216074 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:56.005326  216074 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:56.005405  216074 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:56.005488  216074 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:56.005568  216074 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:56.119904  216074 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:56.120070  216074 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:56.120207  216074 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:56.130630  216074 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:54.179851  214550 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:59:55.466764  214550 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:55.672141  214550 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:59:55.672731  214550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:59:55.836881  214550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:59:56.018357  214550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:59:56.361926  214550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:59:56.873997  214550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:59:57.413691  214550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:59:57.414774  214550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:59:57.417706  214550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:59:57.421342  214550 out.go:252]   - Booting up control plane ...
	I1123 08:59:57.421437  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:59:57.426176  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:59:57.426253  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:59:57.445605  214550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:59:57.445714  214550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:59:57.456012  214550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:59:57.456111  214550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:59:57.456152  214550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:59:57.617060  214550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:59:57.617179  214550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:59:56.136350  216074 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:56.136541  216074 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:56.136667  216074 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:57.121922  216074 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:57.436901  216074 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:57.609063  216074 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:58.013484  216074 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:58.298959  216074 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:58.303729  216074 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-672503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:58.349481  216074 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:58.350030  216074 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-672503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:59.325836  216074 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:00:00.299809  216074 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:59.119693  214550 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500938234s
	I1123 08:59:59.122603  214550 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:59:59.122949  214550 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1123 08:59:59.123601  214550 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:59:59.124077  214550 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:00:00.879718  216074 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:00:00.879799  216074 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:00:01.122151  216074 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:00:03.397018  216074 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:00:05.387724  216074 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:00:05.691737  216074 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:00:06.099799  216074 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:00:06.099904  216074 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:00:06.107751  216074 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:00:03.716327  214550 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.591863015s
	I1123 09:00:09.442146  214550 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.317417042s
	I1123 09:00:09.630647  214550 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.507233792s
	I1123 09:00:09.661041  214550 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:00:09.696775  214550 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:00:09.724658  214550 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:00:09.725105  214550 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-118762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:00:09.789313  214550 kubeadm.go:319] [bootstrap-token] Using token: d97ou5.m8drvm11cz5qqhuf
	I1123 09:00:06.111147  216074 out.go:252]   - Booting up control plane ...
	I1123 09:00:06.111260  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:00:06.111338  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:00:06.111425  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:00:06.141906  216074 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:00:06.142016  216074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:00:06.152623  216074 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:00:06.152727  216074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:00:06.152767  216074 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:00:06.424623  216074 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:00:06.424743  216074 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:00:07.419394  216074 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001849125s
	I1123 09:00:07.422769  216074 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:00:07.422861  216074 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 09:00:07.423174  216074 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:00:07.423260  216074 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:00:09.792446  214550 out.go:252]   - Configuring RBAC rules ...
	I1123 09:00:09.792565  214550 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:00:09.822919  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:00:09.841947  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:00:09.852584  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:00:09.860084  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:00:09.867079  214550 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:00:10.041393  214550 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:00:10.492226  214550 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:00:11.049466  214550 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:00:11.050970  214550 kubeadm.go:319] 
	I1123 09:00:11.051044  214550 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:00:11.051049  214550 kubeadm.go:319] 
	I1123 09:00:11.051126  214550 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:00:11.051130  214550 kubeadm.go:319] 
	I1123 09:00:11.051155  214550 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:00:11.054107  214550 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:00:11.054173  214550 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:00:11.054178  214550 kubeadm.go:319] 
	I1123 09:00:11.054232  214550 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:00:11.054259  214550 kubeadm.go:319] 
	I1123 09:00:11.054308  214550 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:00:11.054312  214550 kubeadm.go:319] 
	I1123 09:00:11.054364  214550 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:00:11.054439  214550 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:00:11.054508  214550 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:00:11.054514  214550 kubeadm.go:319] 
	I1123 09:00:11.054918  214550 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:00:11.054999  214550 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:00:11.055003  214550 kubeadm.go:319] 
	I1123 09:00:11.055310  214550 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token d97ou5.m8drvm11cz5qqhuf \
	I1123 09:00:11.055433  214550 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c \
	I1123 09:00:11.055653  214550 kubeadm.go:319] 	--control-plane 
	I1123 09:00:11.055662  214550 kubeadm.go:319] 
	I1123 09:00:11.056081  214550 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:00:11.056091  214550 kubeadm.go:319] 
	I1123 09:00:11.056374  214550 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token d97ou5.m8drvm11cz5qqhuf \
	I1123 09:00:11.056668  214550 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c 
	I1123 09:00:11.065038  214550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:00:11.065464  214550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:00:11.065590  214550 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:00:11.065601  214550 cni.go:84] Creating CNI manager for ""
	I1123 09:00:11.065609  214550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:00:11.068935  214550 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:00:11.071817  214550 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:00:11.083987  214550 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:00:11.084065  214550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:00:11.157462  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:00:11.877723  214550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:00:11.877851  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:11.877919  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-118762 minikube.k8s.io/updated_at=2025_11_23T09_00_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=default-k8s-diff-port-118762 minikube.k8s.io/primary=true
	I1123 09:00:12.400645  214550 ops.go:34] apiserver oom_adj: -16
	I1123 09:00:12.400749  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.479703  216074 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.056359214s
	I1123 09:00:12.901058  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:13.400921  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:13.901348  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.400890  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.901622  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:15.401708  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:15.797055  214550 kubeadm.go:1114] duration metric: took 3.919248598s to wait for elevateKubeSystemPrivileges
	I1123 09:00:15.797081  214550 kubeadm.go:403] duration metric: took 27.027055323s to StartCluster
	I1123 09:00:15.797098  214550 settings.go:142] acquiring lock: {Name:mkd0156f6f98ed352de83fb5c4c92474ddea9220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:15.797159  214550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 09:00:15.797780  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/kubeconfig: {Name:mk75cb4a9442799c344ac747e18ea4edd6e23c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:15.797984  214550 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:00:15.798066  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:00:15.798303  214550 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:00:15.798340  214550 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:00:15.798395  214550 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-118762"
	I1123 09:00:15.798414  214550 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-118762"
	I1123 09:00:15.798437  214550 host.go:66] Checking if "default-k8s-diff-port-118762" exists ...
	I1123 09:00:15.798912  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.799494  214550 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-118762"
	I1123 09:00:15.799518  214550 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-118762"
	I1123 09:00:15.799812  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.802617  214550 out.go:179] * Verifying Kubernetes components...
	I1123 09:00:15.805826  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:15.840681  214550 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-118762"
	I1123 09:00:15.840730  214550 host.go:66] Checking if "default-k8s-diff-port-118762" exists ...
	I1123 09:00:15.841178  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.841365  214550 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:00:15.845719  214550 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:15.845739  214550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:00:15.845799  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 09:00:15.885107  214550 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:15.885129  214550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:00:15.885196  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 09:00:15.885424  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 09:00:15.922980  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 09:00:16.516094  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:00:16.516301  214550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:16.565568  214550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:16.660294  214550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:17.770086  214550 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.253733356s)
	I1123 09:00:17.770803  214550 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-118762" to be "Ready" ...
	I1123 09:00:17.771113  214550 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.254946263s)
	I1123 09:00:17.771140  214550 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 09:00:18.288784  214550 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-118762" context rescaled to 1 replicas
	I1123 09:00:18.294378  214550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.634044217s)
	I1123 09:00:18.294508  214550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.728864491s)
	I1123 09:00:18.313019  214550 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 09:00:18.174934  216074 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.752142419s
	I1123 09:00:18.924553  216074 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.501560337s
	I1123 09:00:18.944911  216074 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:00:18.969340  216074 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:00:18.982694  216074 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:00:18.982935  216074 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-672503 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:00:18.996135  216074 kubeadm.go:319] [bootstrap-token] Using token: n9250s.xdwmypsz1r225um6
	I1123 09:00:18.999202  216074 out.go:252]   - Configuring RBAC rules ...
	I1123 09:00:18.999323  216074 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:00:19.010682  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:00:19.023889  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:00:19.027010  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:00:19.034948  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:00:19.039786  216074 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:00:19.331973  216074 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:00:19.770619  216074 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:00:20.331084  216074 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:00:20.332385  216074 kubeadm.go:319] 
	I1123 09:00:20.332460  216074 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:00:20.332472  216074 kubeadm.go:319] 
	I1123 09:00:20.332550  216074 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:00:20.332554  216074 kubeadm.go:319] 
	I1123 09:00:20.332585  216074 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:00:20.332649  216074 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:00:20.332706  216074 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:00:20.332714  216074 kubeadm.go:319] 
	I1123 09:00:20.332768  216074 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:00:20.332775  216074 kubeadm.go:319] 
	I1123 09:00:20.332826  216074 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:00:20.332834  216074 kubeadm.go:319] 
	I1123 09:00:20.332886  216074 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:00:20.332964  216074 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:00:20.333036  216074 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:00:20.333044  216074 kubeadm.go:319] 
	I1123 09:00:20.333141  216074 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:00:20.333222  216074 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:00:20.333230  216074 kubeadm.go:319] 
	I1123 09:00:20.333314  216074 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token n9250s.xdwmypsz1r225um6 \
	I1123 09:00:20.333421  216074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c \
	I1123 09:00:20.333454  216074 kubeadm.go:319] 	--control-plane 
	I1123 09:00:20.333461  216074 kubeadm.go:319] 
	I1123 09:00:20.333554  216074 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:00:20.333574  216074 kubeadm.go:319] 
	I1123 09:00:20.333657  216074 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n9250s.xdwmypsz1r225um6 \
	I1123 09:00:20.333764  216074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c 
	I1123 09:00:20.339187  216074 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:00:20.339460  216074 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:00:20.339572  216074 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:00:20.339594  216074 cni.go:84] Creating CNI manager for ""
	I1123 09:00:20.339606  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:00:20.342914  216074 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:00:20.345744  216074 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:00:20.350352  216074 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:00:20.350371  216074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:00:20.365062  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:00:18.315850  214550 addons.go:530] duration metric: took 2.517504837s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1123 09:00:19.773873  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:21.774051  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:20.682862  216074 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:00:20.683008  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:20.683107  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-672503 minikube.k8s.io/updated_at=2025_11_23T09_00_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=embed-certs-672503 minikube.k8s.io/primary=true
	I1123 09:00:20.861424  216074 ops.go:34] apiserver oom_adj: -16
	I1123 09:00:20.881440  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:21.382484  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:21.881564  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:22.381797  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:22.881698  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:23.382044  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:23.881478  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:24.381553  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:24.882135  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:25.085445  216074 kubeadm.go:1114] duration metric: took 4.402483472s to wait for elevateKubeSystemPrivileges
	I1123 09:00:25.085479  216074 kubeadm.go:403] duration metric: took 29.334704925s to StartCluster
	I1123 09:00:25.085499  216074 settings.go:142] acquiring lock: {Name:mkd0156f6f98ed352de83fb5c4c92474ddea9220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:25.085586  216074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 09:00:25.087626  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/kubeconfig: {Name:mk75cb4a9442799c344ac747e18ea4edd6e23c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:25.087936  216074 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:00:25.088691  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:00:25.089017  216074 config.go:182] Loaded profile config "embed-certs-672503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:00:25.089061  216074 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:00:25.089133  216074 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-672503"
	I1123 09:00:25.089153  216074 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-672503"
	I1123 09:00:25.089179  216074 host.go:66] Checking if "embed-certs-672503" exists ...
	I1123 09:00:25.089653  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.090352  216074 addons.go:70] Setting default-storageclass=true in profile "embed-certs-672503"
	I1123 09:00:25.090381  216074 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672503"
	I1123 09:00:25.090715  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.093412  216074 out.go:179] * Verifying Kubernetes components...
	I1123 09:00:25.100650  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:25.132922  216074 addons.go:239] Setting addon default-storageclass=true in "embed-certs-672503"
	I1123 09:00:25.132970  216074 host.go:66] Checking if "embed-certs-672503" exists ...
	I1123 09:00:25.133464  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.134451  216074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:00:25.137634  216074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:25.137660  216074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:00:25.137734  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 09:00:25.175531  216074 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:25.175555  216074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:00:25.175631  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 09:00:25.190357  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 09:00:25.214325  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 09:00:25.395679  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:00:25.445659  216074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:25.568912  216074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:25.606764  216074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:26.047827  216074 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 09:00:26.050542  216074 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672503" to be "Ready" ...
	I1123 09:00:26.465272  216074 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1123 09:00:23.774226  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:26.274269  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:26.468271  216074 addons.go:530] duration metric: took 1.379204566s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 09:00:26.552103  216074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-672503" context rescaled to 1 replicas
	W1123 09:00:28.054477  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:30.054656  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:28.774465  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:30.774882  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:32.553443  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:35.054660  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:33.274428  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:35.774260  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:37.554121  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:40.055622  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:38.273771  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:40.773644  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:42.553668  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:44.553840  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:43.273604  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:45.275951  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:47.773735  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:47.054612  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:49.553846  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:49.774526  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:52.273699  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:51.554200  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:54.053723  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:54.274489  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:56.773822  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:57.776587  214550 node_ready.go:49] node "default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:57.776614  214550 node_ready.go:38] duration metric: took 40.005787911s for node "default-k8s-diff-port-118762" to be "Ready" ...
	I1123 09:00:57.776628  214550 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:00:57.776688  214550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:00:57.792566  214550 api_server.go:72] duration metric: took 41.994554549s to wait for apiserver process to appear ...
	I1123 09:00:57.792589  214550 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:00:57.792608  214550 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 09:00:57.801332  214550 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 09:00:57.802591  214550 api_server.go:141] control plane version: v1.34.1
	I1123 09:00:57.802671  214550 api_server.go:131] duration metric: took 10.074405ms to wait for apiserver health ...
	I1123 09:00:57.802696  214550 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:00:57.806165  214550 system_pods.go:59] 8 kube-system pods found
	I1123 09:00:57.806249  214550 system_pods.go:61] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:57.806272  214550 system_pods.go:61] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:57.806312  214550 system_pods.go:61] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:57.806336  214550 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:57.806359  214550 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:57.806397  214550 system_pods.go:61] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:57.806420  214550 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:57.806446  214550 system_pods.go:61] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:57.806485  214550 system_pods.go:74] duration metric: took 3.749386ms to wait for pod list to return data ...
	I1123 09:00:57.806513  214550 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:00:57.809265  214550 default_sa.go:45] found service account: "default"
	I1123 09:00:57.809285  214550 default_sa.go:55] duration metric: took 2.751519ms for default service account to be created ...
	I1123 09:00:57.809298  214550 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:00:57.811926  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:57.811955  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:57.811962  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:57.811968  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:57.811972  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:57.811977  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:57.811980  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:57.811984  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:57.811991  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:57.812009  214550 retry.go:31] will retry after 274.029839ms: missing components: kube-dns
	I1123 09:00:58.095441  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:58.095474  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:58.095481  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:58.095487  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:58.095491  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:58.095497  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:58.095502  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:58.095506  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:58.095511  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:58.095526  214550 retry.go:31] will retry after 259.858354ms: missing components: kube-dns
	I1123 09:00:58.359494  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:58.359527  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Running
	I1123 09:00:58.359536  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:58.359542  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:58.359546  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:58.359551  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:58.359556  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:58.359560  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:58.359564  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Running
	I1123 09:00:58.359572  214550 system_pods.go:126] duration metric: took 550.268629ms to wait for k8s-apps to be running ...
	I1123 09:00:58.359583  214550 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:00:58.359641  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:00:58.373607  214550 system_svc.go:56] duration metric: took 14.015669ms WaitForService to wait for kubelet
	I1123 09:00:58.373638  214550 kubeadm.go:587] duration metric: took 42.575629379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:00:58.373657  214550 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:00:58.376361  214550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:00:58.376394  214550 node_conditions.go:123] node cpu capacity is 2
	I1123 09:00:58.376408  214550 node_conditions.go:105] duration metric: took 2.746055ms to run NodePressure ...
	I1123 09:00:58.376419  214550 start.go:242] waiting for startup goroutines ...
	I1123 09:00:58.376427  214550 start.go:247] waiting for cluster config update ...
	I1123 09:00:58.376438  214550 start.go:256] writing updated cluster config ...
	I1123 09:00:58.376721  214550 ssh_runner.go:195] Run: rm -f paused
	I1123 09:00:58.380292  214550 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:00:58.385153  214550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r5snd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.390028  214550 pod_ready.go:94] pod "coredns-66bc5c9577-r5snd" is "Ready"
	I1123 09:00:58.390067  214550 pod_ready.go:86] duration metric: took 4.884639ms for pod "coredns-66bc5c9577-r5snd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.392315  214550 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.396380  214550 pod_ready.go:94] pod "etcd-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.396450  214550 pod_ready.go:86] duration metric: took 4.109265ms for pod "etcd-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.398716  214550 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.403219  214550 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.403254  214550 pod_ready.go:86] duration metric: took 4.51516ms for pod "kube-apiserver-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.405723  214550 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.785140  214550 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.785167  214550 pod_ready.go:86] duration metric: took 379.369705ms for pod "kube-controller-manager-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.985264  214550 pod_ready.go:83] waiting for pod "kube-proxy-fwc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.387683  214550 pod_ready.go:94] pod "kube-proxy-fwc9v" is "Ready"
	I1123 09:00:59.387712  214550 pod_ready.go:86] duration metric: took 402.417123ms for pod "kube-proxy-fwc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.588360  214550 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.985884  214550 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:59.985910  214550 pod_ready.go:86] duration metric: took 397.484705ms for pod "kube-scheduler-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.985924  214550 pod_ready.go:40] duration metric: took 1.605599928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:00.360876  214550 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:01:00.365235  214550 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-118762" cluster and "default" namespace by default
	W1123 09:00:56.054171  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:58.059777  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:00.201612  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:02.554079  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:05.054145  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	I1123 09:01:06.553619  216074 node_ready.go:49] node "embed-certs-672503" is "Ready"
	I1123 09:01:06.553653  216074 node_ready.go:38] duration metric: took 40.503031578s for node "embed-certs-672503" to be "Ready" ...
	I1123 09:01:06.553667  216074 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:01:06.553728  216074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:01:06.566313  216074 api_server.go:72] duration metric: took 41.478343311s to wait for apiserver process to appear ...
	I1123 09:01:06.566341  216074 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:01:06.566374  216074 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:01:06.574435  216074 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:01:06.575998  216074 api_server.go:141] control plane version: v1.34.1
	I1123 09:01:06.576024  216074 api_server.go:131] duration metric: took 9.676749ms to wait for apiserver health ...
	I1123 09:01:06.576034  216074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:01:06.579331  216074 system_pods.go:59] 8 kube-system pods found
	I1123 09:01:06.579491  216074 system_pods.go:61] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.579500  216074 system_pods.go:61] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.579506  216074 system_pods.go:61] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.579511  216074 system_pods.go:61] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.579516  216074 system_pods.go:61] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.579524  216074 system_pods.go:61] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.579529  216074 system_pods.go:61] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.579541  216074 system_pods.go:61] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.579548  216074 system_pods.go:74] duration metric: took 3.508309ms to wait for pod list to return data ...
	I1123 09:01:06.579562  216074 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:01:06.584140  216074 default_sa.go:45] found service account: "default"
	I1123 09:01:06.584219  216074 default_sa.go:55] duration metric: took 4.649963ms for default service account to be created ...
	I1123 09:01:06.584244  216074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:01:06.587869  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:06.587906  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.587913  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.587919  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.587923  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.587929  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.587933  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.587938  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.587945  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.587968  216074 retry.go:31] will retry after 247.424175ms: missing components: kube-dns
	I1123 09:01:06.841170  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:06.841208  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.841215  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.841222  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.841227  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.841232  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.841237  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.841241  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.841246  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.841262  216074 retry.go:31] will retry after 283.378756ms: missing components: kube-dns
	I1123 09:01:07.129581  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.129666  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.129688  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.129732  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.129759  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.129784  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.129819  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.129847  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.129870  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.129915  216074 retry.go:31] will retry after 365.111173ms: missing components: kube-dns
	I1123 09:01:07.499321  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.499446  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.499463  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.499471  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.499475  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.499500  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.499508  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.499546  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.499559  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.499580  216074 retry.go:31] will retry after 378.113017ms: missing components: kube-dns
	I1123 09:01:07.881489  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.881535  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.881542  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.881549  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.881554  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.881559  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.881562  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.881566  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.881570  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.881588  216074 retry.go:31] will retry after 690.773315ms: missing components: kube-dns
	I1123 09:01:08.576591  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:08.576623  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Running
	I1123 09:01:08.576630  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:08.576635  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:08.576657  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:08.576662  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:08.576666  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:08.576671  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:08.576676  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:08.576687  216074 system_pods.go:126] duration metric: took 1.992424101s to wait for k8s-apps to be running ...
	I1123 09:01:08.576700  216074 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:01:08.576756  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:01:08.591468  216074 system_svc.go:56] duration metric: took 14.759167ms WaitForService to wait for kubelet
	I1123 09:01:08.591497  216074 kubeadm.go:587] duration metric: took 43.503532438s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:01:08.591516  216074 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:01:08.594570  216074 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:01:08.594606  216074 node_conditions.go:123] node cpu capacity is 2
	I1123 09:01:08.594621  216074 node_conditions.go:105] duration metric: took 3.099272ms to run NodePressure ...
	I1123 09:01:08.594634  216074 start.go:242] waiting for startup goroutines ...
	I1123 09:01:08.594642  216074 start.go:247] waiting for cluster config update ...
	I1123 09:01:08.594654  216074 start.go:256] writing updated cluster config ...
	I1123 09:01:08.594942  216074 ssh_runner.go:195] Run: rm -f paused
	I1123 09:01:08.598542  216074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:08.602701  216074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nhnbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.608070  216074 pod_ready.go:94] pod "coredns-66bc5c9577-nhnbc" is "Ready"
	I1123 09:01:08.608097  216074 pod_ready.go:86] duration metric: took 5.358349ms for pod "coredns-66bc5c9577-nhnbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.610514  216074 pod_ready.go:83] waiting for pod "etcd-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.615555  216074 pod_ready.go:94] pod "etcd-embed-certs-672503" is "Ready"
	I1123 09:01:08.615582  216074 pod_ready.go:86] duration metric: took 5.042688ms for pod "etcd-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.618015  216074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.624626  216074 pod_ready.go:94] pod "kube-apiserver-embed-certs-672503" is "Ready"
	I1123 09:01:08.624654  216074 pod_ready.go:86] duration metric: took 6.607794ms for pod "kube-apiserver-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.632607  216074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.003276  216074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-672503" is "Ready"
	I1123 09:01:09.003305  216074 pod_ready.go:86] duration metric: took 370.669957ms for pod "kube-controller-manager-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.204229  216074 pod_ready.go:83] waiting for pod "kube-proxy-wbnjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.603471  216074 pod_ready.go:94] pod "kube-proxy-wbnjd" is "Ready"
	I1123 09:01:09.603500  216074 pod_ready.go:86] duration metric: took 399.242725ms for pod "kube-proxy-wbnjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.802674  216074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:10.203777  216074 pod_ready.go:94] pod "kube-scheduler-embed-certs-672503" is "Ready"
	I1123 09:01:10.203816  216074 pod_ready.go:86] duration metric: took 401.074978ms for pod "kube-scheduler-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:10.203830  216074 pod_ready.go:40] duration metric: took 1.605254448s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:10.258134  216074 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:01:10.261593  216074 out.go:179] * Done! kubectl is now configured to use "embed-certs-672503" cluster and "default" namespace by default
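	(The repeated node_ready.go:57 lines above are minikube polling each node's Ready condition until the kindnet CNI brings pod networking up; both clusters reported Ready after roughly 40 seconds. As a rough manual equivalent, assuming the kubeconfig context written by this run, the same condition can be read directly:
	
	  kubectl --context embed-certs-672503 get node embed-certs-672503 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	
	which prints "True" once the kubelet reports the node Ready.)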
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	ee485d2d85455       1611cd07b61d5       10 seconds ago       Running             busybox                   0                   6a938610ed5dc       busybox                                                default
	0ba4019410979       ba04bb24b9575       16 seconds ago       Running             storage-provisioner       0                   5020fccfe224f       storage-provisioner                                    kube-system
	70910ddc2313a       138784d87c9c5       16 seconds ago       Running             coredns                   0                   44978605b7387       coredns-66bc5c9577-r5snd                               kube-system
	cf43bad326873       b1a8c6f707935       57 seconds ago       Running             kindnet-cni               0                   77874024967df       kindnet-6vk7l                                          kube-system
	bc14f8da099ba       05baa95f5142d       57 seconds ago       Running             kube-proxy                0                   8b9d1b836c808       kube-proxy-fwc9v                                       kube-system
	09ad8e6abf33a       a1894772a478e       About a minute ago   Running             etcd                      0                   32ba499f97a91       etcd-default-k8s-diff-port-118762                      kube-system
	bd51fcd97f080       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   446c99929758b       kube-controller-manager-default-k8s-diff-port-118762   kube-system
	7cf9d65a2dbbc       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   dbcd52b3e1ed4       kube-scheduler-default-k8s-diff-port-118762            kube-system
	e44571e8430b7       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   618698cf61b6c       kube-apiserver-default-k8s-diff-port-118762            kube-system
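	(The table above is the CRI view of the default-k8s-diff-port-118762 node at post-mortem time: busybox, coredns, and storage-provisioner only started once the node went Ready, consistent with the ~40s wait logged earlier. A comparable listing can be reproduced from the host, assuming the profile name used in this run:
	
	  minikube -p default-k8s-diff-port-118762 ssh -- sudo crictl ps -a)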
	
	
	==> containerd <==
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.907568691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:d0fab715-c08e-4a99-a6ba-4b4837f47aaf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5020fccfe224fa41e7c5a4304f87ac89370f5441ed25ff7f66648f1e73d92228\""
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.909752658Z" level=info msg="connecting to shim 70910ddc2313a5e0c777904ee33fd767a89765f0c9caba6cae5f963668afc2ab" address="unix:///run/containerd/s/9941077eab7d87f0200db9d032b8f718ab3cdf55a4ba3c0ed51644876741436b" protocol=ttrpc version=3
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.920596201Z" level=info msg="CreateContainer within sandbox \"5020fccfe224fa41e7c5a4304f87ac89370f5441ed25ff7f66648f1e73d92228\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.933121561Z" level=info msg="Container 0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.948690578Z" level=info msg="CreateContainer within sandbox \"5020fccfe224fa41e7c5a4304f87ac89370f5441ed25ff7f66648f1e73d92228\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd\""
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.954433988Z" level=info msg="StartContainer for \"0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd\""
	Nov 23 09:00:57 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:57.959736394Z" level=info msg="connecting to shim 0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd" address="unix:///run/containerd/s/8ae5e588359c9237fc4bcb667c3a9546bb9504eae85b4510bb8051af51ef3f9f" protocol=ttrpc version=3
	Nov 23 09:00:58 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:58.011881271Z" level=info msg="StartContainer for \"70910ddc2313a5e0c777904ee33fd767a89765f0c9caba6cae5f963668afc2ab\" returns successfully"
	Nov 23 09:00:58 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:00:58.038380970Z" level=info msg="StartContainer for \"0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd\" returns successfully"
	Nov 23 09:01:01 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:01.021979558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:5c2314ab-27c6-4441-889f-af501dd53560,Namespace:default,Attempt:0,}"
	Nov 23 09:01:01 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:01.075803931Z" level=info msg="connecting to shim 6a938610ed5dcd0514f1594e43a0d209ef36d2162909060ca02239208fafea68" address="unix:///run/containerd/s/914f2ffdf4df9e947b743ded4ade49c0ce040d8ecec6d4d1c7f9f93cc6578315" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:01:01 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:01.142056966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:5c2314ab-27c6-4441-889f-af501dd53560,Namespace:default,Attempt:0,} returns sandbox id \"6a938610ed5dcd0514f1594e43a0d209ef36d2162909060ca02239208fafea68\""
	Nov 23 09:01:01 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:01.144729109Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.378342421Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.380249258Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937191"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.382738704Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.386861893Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.387334299Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.242556319s"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.387415374Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.395650955Z" level=info msg="CreateContainer within sandbox \"6a938610ed5dcd0514f1594e43a0d209ef36d2162909060ca02239208fafea68\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.408565307Z" level=info msg="Container ee485d2d854557dedb00ed54d6f67e301df9bb100e2b42c12d0e5a3a38dfdb64: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.419031198Z" level=info msg="CreateContainer within sandbox \"6a938610ed5dcd0514f1594e43a0d209ef36d2162909060ca02239208fafea68\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"ee485d2d854557dedb00ed54d6f67e301df9bb100e2b42c12d0e5a3a38dfdb64\""
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.420053491Z" level=info msg="StartContainer for \"ee485d2d854557dedb00ed54d6f67e301df9bb100e2b42c12d0e5a3a38dfdb64\""
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.420978002Z" level=info msg="connecting to shim ee485d2d854557dedb00ed54d6f67e301df9bb100e2b42c12d0e5a3a38dfdb64" address="unix:///run/containerd/s/914f2ffdf4df9e947b743ded4ade49c0ce040d8ecec6d4d1c7f9f93cc6578315" protocol=ttrpc version=3
	Nov 23 09:01:03 default-k8s-diff-port-118762 containerd[759]: time="2025-11-23T09:01:03.476406721Z" level=info msg="StartContainer for \"ee485d2d854557dedb00ed54d6f67e301df9bb100e2b42c12d0e5a3a38dfdb64\" returns successfully"
	
	
	==> coredns [70910ddc2313a5e0c777904ee33fd767a89765f0c9caba6cae5f963668afc2ab] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44118 - 742 "HINFO IN 592143518793182462.6728500283451617551. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.016244033s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-118762
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-118762
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=default-k8s-diff-port-118762
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_00_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:00:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-118762
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:01:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:01:13 +0000   Sun, 23 Nov 2025 09:00:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:01:13 +0000   Sun, 23 Nov 2025 09:00:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:01:13 +0000   Sun, 23 Nov 2025 09:00:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:01:13 +0000   Sun, 23 Nov 2025 09:00:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-118762
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                2cb290fb-8655-472e-b198-65084610e8db
	  Boot ID:                    86d8501c-1df5-4d7e-90cb-d9ad951202c5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-r5snd                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59s
	  kube-system                 etcd-default-k8s-diff-port-118762                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-6vk7l                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-default-k8s-diff-port-118762             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-118762    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-fwc9v                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-default-k8s-diff-port-118762             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 56s                kube-proxy       
	  Normal   NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 76s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 76s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  75s (x8 over 76s)  kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     75s (x7 over 76s)  kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    75s (x8 over 76s)  kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  63s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node default-k8s-diff-port-118762 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                node-controller  Node default-k8s-diff-port-118762 event: Registered Node default-k8s-diff-port-118762 in Controller
	  Normal   NodeReady                17s                kubelet          Node default-k8s-diff-port-118762 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014670] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505841] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033008] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.738583] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.057424] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:10] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:26] hrtimer: interrupt took 58442338 ns
	
	
	==> etcd [09ad8e6abf33a65f71b353c02b9db597ae8f1ce72e3af1ef89165c0123b77e26] <==
	{"level":"warn","ts":"2025-11-23T09:00:04.385820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.403998Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.465382Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57740","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.472971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.505281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.538229Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.558054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.581810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.623846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.641989Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.685344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.703575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.739608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.767858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.803454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.823846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.851452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.880249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.900620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.953695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:04.970163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:05.005077Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:05.024931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:05.049222Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58134","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:05.207649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58148","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:01:14 up  1:43,  0 user,  load average: 2.66, 3.47, 2.96
	Linux default-k8s-diff-port-118762 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [cf43bad32687302a32aff514643f251ad92d683a18f4ad0a7bc50bf5789f2ea2] <==
	I1123 09:00:17.131135       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:00:17.131405       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:00:17.131539       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:00:17.131552       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:00:17.131565       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:00:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:00:17.362314       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:00:17.362334       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:00:17.362343       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:00:17.362642       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:00:47.359794       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:00:47.363426       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:00:47.363434       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 09:00:47.363584       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 09:00:48.862638       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:00:48.862747       1 metrics.go:72] Registering metrics
	I1123 09:00:48.862838       1 controller.go:711] "Syncing nftables rules"
	I1123 09:00:57.365004       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:00:57.365046       1 main.go:301] handling current node
	I1123 09:01:07.361433       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:01:07.361478       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e44571e8430b7b63843bce11f9d3695233d4db2d003a5243d4835a53b1578eb7] <==
	I1123 09:00:06.964288       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:00:06.964465       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:00:06.972767       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:00:06.984393       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 09:00:06.984569       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 09:00:06.985925       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:06.987053       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:00:07.382156       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:00:07.406558       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:00:07.406753       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:00:08.845585       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:00:09.081019       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:00:09.332957       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:00:09.377082       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 09:00:09.378583       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:00:09.393243       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:00:09.504263       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:00:10.457185       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:00:10.485950       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:00:10.502730       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:00:14.778660       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:14.785690       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:15.246256       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:00:15.544810       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:01:10.896098       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:58142: use of closed network connection
	
	
	==> kube-controller-manager [bd51fcd97f080424304216ba2d43e32e3983e2704297754815c3137df1a04a3b] <==
	I1123 09:00:14.586170       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:00:14.591567       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:00:14.591919       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:00:14.592096       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:00:14.592207       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:00:14.592728       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:00:14.591685       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:00:14.593056       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:00:14.593257       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-118762"
	I1123 09:00:14.593377       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 09:00:14.591701       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:00:14.596291       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:00:14.606984       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:00:14.607316       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:00:14.607448       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:00:14.607572       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:00:14.591507       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:00:14.610289       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:14.610309       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:14.619716       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:14.619927       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:00:14.619941       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:00:14.620014       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:00:14.621424       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:00:59.597721       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bc14f8da099ba3492358f197cf0623d7d6ca4a0ef5346cdd263dd0dfa657c208] <==
	I1123 09:00:17.169783       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:00:17.399059       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:00:17.519171       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:00:17.519207       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:00:17.519288       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:00:17.657921       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:00:17.657987       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:00:17.670728       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:00:17.671081       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:00:17.671103       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:17.672690       1 config.go:200] "Starting service config controller"
	I1123 09:00:17.672715       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:00:17.672734       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:00:17.672738       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:00:17.672748       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:00:17.672752       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:00:17.673620       1 config.go:309] "Starting node config controller"
	I1123 09:00:17.673634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:00:17.673641       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:00:17.773079       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:00:17.773125       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:00:17.773177       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7cf9d65a2dbbc14a2fb50e2921407c3f809339e7f9aac648cde3f0fe0c231ff1] <==
	I1123 09:00:06.085648       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:00:09.385214       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:00:09.385330       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:09.395529       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:00:09.395763       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:00:09.395917       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:09.396014       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:09.396169       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:00:09.396303       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:00:09.399952       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:00:09.400054       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:00:09.497820       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:00:09.497895       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 09:00:09.498023       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:00:11 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:11.711288    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-118762" podStartSLOduration=0.711269115 podStartE2EDuration="711.269115ms" podCreationTimestamp="2025-11-23 09:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:11.678666383 +0000 UTC m=+1.292032469" watchObservedRunningTime="2025-11-23 09:00:11.711269115 +0000 UTC m=+1.324635177"
	Nov 23 09:00:11 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:11.741834    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-118762" podStartSLOduration=0.741814865 podStartE2EDuration="741.814865ms" podCreationTimestamp="2025-11-23 09:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:11.715859072 +0000 UTC m=+1.329225125" watchObservedRunningTime="2025-11-23 09:00:11.741814865 +0000 UTC m=+1.355180927"
	Nov 23 09:00:11 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:11.775224    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-118762" podStartSLOduration=0.775204803 podStartE2EDuration="775.204803ms" podCreationTimestamp="2025-11-23 09:00:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:11.742224665 +0000 UTC m=+1.355590727" watchObservedRunningTime="2025-11-23 09:00:11.775204803 +0000 UTC m=+1.388570857"
	Nov 23 09:00:14 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:14.599097    1469 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:00:14 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:14.599938    1469 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.758161    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f86gv\" (UniqueName: \"kubernetes.io/projected/d4b1b360-1ad9-4d21-bf09-34d8328640f7-kube-api-access-f86gv\") pod \"kube-proxy-fwc9v\" (UID: \"d4b1b360-1ad9-4d21-bf09-34d8328640f7\") " pod="kube-system/kube-proxy-fwc9v"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.758675    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d4b1b360-1ad9-4d21-bf09-34d8328640f7-lib-modules\") pod \"kube-proxy-fwc9v\" (UID: \"d4b1b360-1ad9-4d21-bf09-34d8328640f7\") " pod="kube-system/kube-proxy-fwc9v"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.758797    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/110880c9-bd5d-4589-b067-2b1f1168fa0c-cni-cfg\") pod \"kindnet-6vk7l\" (UID: \"110880c9-bd5d-4589-b067-2b1f1168fa0c\") " pod="kube-system/kindnet-6vk7l"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.758894    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/110880c9-bd5d-4589-b067-2b1f1168fa0c-xtables-lock\") pod \"kindnet-6vk7l\" (UID: \"110880c9-bd5d-4589-b067-2b1f1168fa0c\") " pod="kube-system/kindnet-6vk7l"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.758999    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92wmn\" (UniqueName: \"kubernetes.io/projected/110880c9-bd5d-4589-b067-2b1f1168fa0c-kube-api-access-92wmn\") pod \"kindnet-6vk7l\" (UID: \"110880c9-bd5d-4589-b067-2b1f1168fa0c\") " pod="kube-system/kindnet-6vk7l"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.759093    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d4b1b360-1ad9-4d21-bf09-34d8328640f7-xtables-lock\") pod \"kube-proxy-fwc9v\" (UID: \"d4b1b360-1ad9-4d21-bf09-34d8328640f7\") " pod="kube-system/kube-proxy-fwc9v"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.759181    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/110880c9-bd5d-4589-b067-2b1f1168fa0c-lib-modules\") pod \"kindnet-6vk7l\" (UID: \"110880c9-bd5d-4589-b067-2b1f1168fa0c\") " pod="kube-system/kindnet-6vk7l"
	Nov 23 09:00:15 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:15.759281    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d4b1b360-1ad9-4d21-bf09-34d8328640f7-kube-proxy\") pod \"kube-proxy-fwc9v\" (UID: \"d4b1b360-1ad9-4d21-bf09-34d8328640f7\") " pod="kube-system/kube-proxy-fwc9v"
	Nov 23 09:00:16 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:16.032336    1469 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 09:00:18 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:18.101381    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fwc9v" podStartSLOduration=3.101361107 podStartE2EDuration="3.101361107s" podCreationTimestamp="2025-11-23 09:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:18.036753674 +0000 UTC m=+7.650119736" watchObservedRunningTime="2025-11-23 09:00:18.101361107 +0000 UTC m=+7.714727161"
	Nov 23 09:00:20 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:20.804650    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-6vk7l" podStartSLOduration=5.804629121 podStartE2EDuration="5.804629121s" podCreationTimestamp="2025-11-23 09:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:18.168279023 +0000 UTC m=+7.781645093" watchObservedRunningTime="2025-11-23 09:00:20.804629121 +0000 UTC m=+10.417995183"
	Nov 23 09:00:57 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:57.371436    1469 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:00:57 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:57.426359    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cacf6afe-5fee-4f94-8eb9-c7c24526cf27-config-volume\") pod \"coredns-66bc5c9577-r5snd\" (UID: \"cacf6afe-5fee-4f94-8eb9-c7c24526cf27\") " pod="kube-system/coredns-66bc5c9577-r5snd"
	Nov 23 09:00:57 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:57.426437    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rs5fl\" (UniqueName: \"kubernetes.io/projected/cacf6afe-5fee-4f94-8eb9-c7c24526cf27-kube-api-access-rs5fl\") pod \"coredns-66bc5c9577-r5snd\" (UID: \"cacf6afe-5fee-4f94-8eb9-c7c24526cf27\") " pod="kube-system/coredns-66bc5c9577-r5snd"
	Nov 23 09:00:57 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:57.527252    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxcm\" (UniqueName: \"kubernetes.io/projected/d0fab715-c08e-4a99-a6ba-4b4837f47aaf-kube-api-access-sjxcm\") pod \"storage-provisioner\" (UID: \"d0fab715-c08e-4a99-a6ba-4b4837f47aaf\") " pod="kube-system/storage-provisioner"
	Nov 23 09:00:57 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:57.527318    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d0fab715-c08e-4a99-a6ba-4b4837f47aaf-tmp\") pod \"storage-provisioner\" (UID: \"d0fab715-c08e-4a99-a6ba-4b4837f47aaf\") " pod="kube-system/storage-provisioner"
	Nov 23 09:00:58 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:58.174501    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-r5snd" podStartSLOduration=43.174480426 podStartE2EDuration="43.174480426s" podCreationTimestamp="2025-11-23 09:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:58.151526592 +0000 UTC m=+47.764892662" watchObservedRunningTime="2025-11-23 09:00:58.174480426 +0000 UTC m=+47.787846480"
	Nov 23 09:00:58 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:00:58.194280    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=40.194260207 podStartE2EDuration="40.194260207s" podCreationTimestamp="2025-11-23 09:00:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:58.175382341 +0000 UTC m=+47.788748493" watchObservedRunningTime="2025-11-23 09:00:58.194260207 +0000 UTC m=+47.807626261"
	Nov 23 09:01:00 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:01:00.785585    1469 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtmdg\" (UniqueName: \"kubernetes.io/projected/5c2314ab-27c6-4441-889f-af501dd53560-kube-api-access-wtmdg\") pod \"busybox\" (UID: \"5c2314ab-27c6-4441-889f-af501dd53560\") " pod="default/busybox"
	Nov 23 09:01:04 default-k8s-diff-port-118762 kubelet[1469]: I1123 09:01:04.239782    1469 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.995372546 podStartE2EDuration="4.23976421s" podCreationTimestamp="2025-11-23 09:01:00 +0000 UTC" firstStartedPulling="2025-11-23 09:01:01.143978392 +0000 UTC m=+50.757344446" lastFinishedPulling="2025-11-23 09:01:03.388370056 +0000 UTC m=+53.001736110" observedRunningTime="2025-11-23 09:01:04.239259656 +0000 UTC m=+53.852625710" watchObservedRunningTime="2025-11-23 09:01:04.23976421 +0000 UTC m=+53.853130263"
	
	
	==> storage-provisioner [0ba40194109791d104f78d8c49fce8f17476a8f2eefb62ffbe6dfb2839e696cd] <==
	I1123 09:00:58.098919       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:00:58.101999       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:58.111039       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:00:58.111418       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:00:58.111953       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa67e961-0088-43e8-a322-4cd46a51ea66", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-118762_a5d34672-e260-47df-a56c-b960d50ac6cd became leader
	I1123 09:00:58.112166       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-118762_a5d34672-e260-47df-a56c-b960d50ac6cd!
	W1123 09:00:58.121147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:00:58.128516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:00:58.214076       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-118762_a5d34672-e260-47df-a56c-b960d50ac6cd!
	W1123 09:01:00.281323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:00.357953       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:02.361254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:02.369208       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:04.372260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:04.376993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:06.380476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:06.390697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:08.394222       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:08.399004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:10.402776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:10.415195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:12.418896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:12.424565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:14.427749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:14.434508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-118762 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (15.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (14.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-672503 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b98565e7-4d04-4d9a-b95e-186c353129dc] Pending
helpers_test.go:352: "busybox" [b98565e7-4d04-4d9a-b95e-186c353129dc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b98565e7-4d04-4d9a-b95e-186c353129dc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.003338316s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-672503 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
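Note: the failed assertion simply compares the soft open-file limit reported inside the container with the fixed value 1048576 expected by the test. A minimal sketch for re-running the same check by hand, assuming the busybox pod from testdata/busybox.yaml is still running in the default namespace and that the node's containerd systemd unit is named containerd (the last step is an extra, hypothetical diagnostic, not part of the test):

    # soft open-file limit as seen inside the container (test expects 1048576, observed 1024)
    kubectl --context embed-certs-672503 exec busybox -- /bin/sh -c "ulimit -n"
    # hard limit for comparison; a 1024 soft / 1048576 hard split would point at the soft limit only
    kubectl --context embed-certs-672503 exec busybox -- /bin/sh -c "ulimit -Hn"
    # NOFILE limit of the containerd service on the node, assuming the container inherits it
    out/minikube-linux-arm64 -p embed-certs-672503 ssh -- systemctl show containerd --property=LimitNOFILE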
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-672503
helpers_test.go:243: (dbg) docker inspect embed-certs-672503:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a",
	        "Created": "2025-11-23T08:59:46.1804136Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 217101,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:59:46.242545258Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a/hostname",
	        "HostsPath": "/var/lib/docker/containers/3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a/hosts",
	        "LogPath": "/var/lib/docker/containers/3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a/3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a-json.log",
	        "Name": "/embed-certs-672503",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-672503:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-672503",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a",
	                "LowerDir": "/var/lib/docker/overlay2/d9e813299ad3056c7933101be61b4b41ca4cfef00363799af7d026e628e5e44c-init/diff:/var/lib/docker/overlay2/e1de88c117c0c773e1fa636243190fd97eadaa5a8e1ee08fd53827cbac767d35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9e813299ad3056c7933101be61b4b41ca4cfef00363799af7d026e628e5e44c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9e813299ad3056c7933101be61b4b41ca4cfef00363799af7d026e628e5e44c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9e813299ad3056c7933101be61b4b41ca4cfef00363799af7d026e628e5e44c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-672503",
	                "Source": "/var/lib/docker/volumes/embed-certs-672503/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-672503",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-672503",
	                "name.minikube.sigs.k8s.io": "embed-certs-672503",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efae4430f14a822cd937193977eb629d5980941044bed0c01d3489be3d3dd295",
	            "SandboxKey": "/var/run/docker/netns/efae4430f14a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-672503": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:27:92:d2:91:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f1c865c13f589ba7deeafc84c206cf7e759a774dbe5f964667b108e41ea38191",
	                    "EndpointID": "862bc3ada9b10aa54a8f695ed9bac3aea632e7a3002849c5c6b6714677787b6e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-672503",
	                        "3da4c63ab75a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
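The docker inspect output above shows "Ulimits": [] in the container's HostConfig, so the container falls back to the Docker daemon's default ulimits rather than any per-container override. A quick way to confirm this on the test host (a sketch, assuming the embed-certs-672503 container is still running and the docker CLI is on PATH) is:

	docker inspect --format '{{json .HostConfig.Ulimits}}' embed-certs-672503
	docker inspect --format '{{json .NetworkSettings.Ports}}' embed-certs-672503
	docker exec embed-certs-672503 sh -c 'ulimit -n'

The first command prints the per-container ulimit overrides (empty here), the second the published port map shown above, and the last the soft open-files limit actually visible inside the container.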
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-672503 -n embed-certs-672503
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-672503 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-672503 logs -n 25: (1.20175672s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ start   │ -p force-systemd-env-023309 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p kubernetes-upgrade-291582                                                                                                                                                                                                                        │ kubernetes-upgrade-291582    │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p cert-expiration-918102 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ force-systemd-env-023309 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p force-systemd-env-023309                                                                                                                                                                                                                         │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-options-886452 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ cert-options-886452 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ -p cert-options-886452 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-options-886452                                                                                                                                                                                                                              │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-132097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ stop    │ -p old-k8s-version-132097 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-132097 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ image   │ old-k8s-version-132097 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p old-k8s-version-132097 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ unpause │ -p old-k8s-version-132097 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p old-k8s-version-132097                                                                                                                                                                                                                           │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p cert-expiration-918102 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p old-k8s-version-132097                                                                                                                                                                                                                           │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p default-k8s-diff-port-118762 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p cert-expiration-918102                                                                                                                                                                                                                           │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p embed-certs-672503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-118762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ stop    │ -p default-k8s-diff-port-118762 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:59:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:59:40.577485  216074 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:59:40.577691  216074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:40.577718  216074 out.go:374] Setting ErrFile to fd 2...
	I1123 08:59:40.577739  216074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:40.578089  216074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:59:40.578573  216074 out.go:368] Setting JSON to false
	I1123 08:59:40.579525  216074 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6133,"bootTime":1763882248,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:59:40.579625  216074 start.go:143] virtualization:  
	I1123 08:59:40.583259  216074 out.go:179] * [embed-certs-672503] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:59:40.587830  216074 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:59:40.587967  216074 notify.go:221] Checking for updates...
	I1123 08:59:40.594558  216074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:59:40.597788  216074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:59:40.601027  216074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:59:40.604233  216074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:59:40.607539  216074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:59:40.611140  216074 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:40.611247  216074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:59:40.656282  216074 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:59:40.656413  216074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:40.752458  216074 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:59:40.738300735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:40.752566  216074 docker.go:319] overlay module found
	I1123 08:59:40.756622  216074 out.go:179] * Using the docker driver based on user configuration
	I1123 08:59:40.759788  216074 start.go:309] selected driver: docker
	I1123 08:59:40.759810  216074 start.go:927] validating driver "docker" against <nil>
	I1123 08:59:40.759823  216074 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:59:40.760559  216074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:40.840879  216074 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-23 08:59:40.831791559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:40.841036  216074 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:59:40.841265  216074 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:59:40.844487  216074 out.go:179] * Using Docker driver with root privileges
	I1123 08:59:40.847551  216074 cni.go:84] Creating CNI manager for ""
	I1123 08:59:40.847624  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:40.847640  216074 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:59:40.847726  216074 start.go:353] cluster config:
	{Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:40.850947  216074 out.go:179] * Starting "embed-certs-672503" primary control-plane node in "embed-certs-672503" cluster
	I1123 08:59:40.853960  216074 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:59:40.856924  216074 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:59:40.859875  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:40.859924  216074 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 08:59:40.859933  216074 cache.go:65] Caching tarball of preloaded images
	I1123 08:59:40.859968  216074 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:59:40.860013  216074 preload.go:238] Found /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:59:40.860024  216074 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:59:40.860143  216074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json ...
	I1123 08:59:40.860163  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json: {Name:mkb81d39d58a71dac5e98d24c241cff9b78e273e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:40.879736  216074 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:59:40.879759  216074 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:59:40.879779  216074 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:59:40.879808  216074 start.go:360] acquireMachinesLock for embed-certs-672503: {Name:mk52b3d46d7a43264b4677c9fc6abfc0706853fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:59:40.879915  216074 start.go:364] duration metric: took 86.869µs to acquireMachinesLock for "embed-certs-672503"
	I1123 08:59:40.879944  216074 start.go:93] Provisioning new machine with config: &{Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:59:40.880019  216074 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:59:39.039954  214550 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-118762:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.007752645s)
	I1123 08:59:39.039991  214550 kic.go:203] duration metric: took 5.007913738s to extract preloaded images to volume ...
	W1123 08:59:39.040149  214550 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:59:39.040271  214550 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:59:39.103132  214550 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-118762 --name default-k8s-diff-port-118762 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-118762 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-118762 --network default-k8s-diff-port-118762 --ip 192.168.85.2 --volume default-k8s-diff-port-118762:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:59:39.606571  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Running}}
	I1123 08:59:39.652908  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:39.675600  214550 cli_runner.go:164] Run: docker exec default-k8s-diff-port-118762 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:59:39.805153  214550 oci.go:144] the created container "default-k8s-diff-port-118762" has a running status.
	I1123 08:59:39.805181  214550 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa...
	I1123 08:59:40.603002  214550 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:59:40.646836  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:40.670926  214550 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:59:40.670945  214550 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-118762 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:59:40.744487  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:40.770445  214550 machine.go:94] provisionDockerMachine start ...
	I1123 08:59:40.770539  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:40.791316  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:40.791758  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:40.791772  214550 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:59:40.792437  214550 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51880->127.0.0.1:33064: read: connection reset by peer
	I1123 08:59:40.883578  216074 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:59:40.883819  216074 start.go:159] libmachine.API.Create for "embed-certs-672503" (driver="docker")
	I1123 08:59:40.883864  216074 client.go:173] LocalClient.Create starting
	I1123 08:59:40.883946  216074 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem
	I1123 08:59:40.883982  216074 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:40.884002  216074 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:40.884067  216074 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem
	I1123 08:59:40.884090  216074 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:40.884109  216074 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:40.884452  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:59:40.900264  216074 cli_runner.go:211] docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:59:40.900362  216074 network_create.go:284] running [docker network inspect embed-certs-672503] to gather additional debugging logs...
	I1123 08:59:40.900388  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503
	W1123 08:59:40.916918  216074 cli_runner.go:211] docker network inspect embed-certs-672503 returned with exit code 1
	I1123 08:59:40.916950  216074 network_create.go:287] error running [docker network inspect embed-certs-672503]: docker network inspect embed-certs-672503: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-672503 not found
	I1123 08:59:40.916965  216074 network_create.go:289] output of [docker network inspect embed-certs-672503]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-672503 not found
	
	** /stderr **
	I1123 08:59:40.917065  216074 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:40.933652  216074 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ab12b2c3b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:c9:6d:7b:80:76} reservation:<nil>}
	I1123 08:59:40.933989  216074 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7f5e4a52a57c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:1a:79:b2:02:66} reservation:<nil>}
	I1123 08:59:40.934307  216074 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed031858d624 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:47:7d:04:56:4a} reservation:<nil>}
	I1123 08:59:40.934717  216074 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7270}
	I1123 08:59:40.934741  216074 network_create.go:124] attempt to create docker network embed-certs-672503 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:59:40.934796  216074 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-672503 embed-certs-672503
	I1123 08:59:40.992310  216074 network_create.go:108] docker network embed-certs-672503 192.168.76.0/24 created
	I1123 08:59:40.992345  216074 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-672503" container
	I1123 08:59:40.992424  216074 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:59:41.010086  216074 cli_runner.go:164] Run: docker volume create embed-certs-672503 --label name.minikube.sigs.k8s.io=embed-certs-672503 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:59:41.028903  216074 oci.go:103] Successfully created a docker volume embed-certs-672503
	I1123 08:59:41.029006  216074 cli_runner.go:164] Run: docker run --rm --name embed-certs-672503-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-672503 --entrypoint /usr/bin/test -v embed-certs-672503:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:59:41.597394  216074 oci.go:107] Successfully prepared a docker volume embed-certs-672503
	I1123 08:59:41.597456  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:41.597467  216074 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:59:41.597532  216074 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-672503:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:59:43.963549  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-118762
	
	I1123 08:59:43.963629  214550 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-118762"
	I1123 08:59:43.963730  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:43.982067  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:43.982376  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:43.982388  214550 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-118762 && echo "default-k8s-diff-port-118762" | sudo tee /etc/hostname
	I1123 08:59:44.162438  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-118762
	
	I1123 08:59:44.162524  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.184402  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:44.184717  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:44.184743  214550 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-118762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-118762/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-118762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:59:44.387688  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:59:44.387725  214550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-2811/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-2811/.minikube}
	I1123 08:59:44.387751  214550 ubuntu.go:190] setting up certificates
	I1123 08:59:44.387761  214550 provision.go:84] configureAuth start
	I1123 08:59:44.387823  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.406977  214550 provision.go:143] copyHostCerts
	I1123 08:59:44.407043  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem, removing ...
	I1123 08:59:44.407056  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem
	I1123 08:59:44.407135  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem (1082 bytes)
	I1123 08:59:44.407247  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem, removing ...
	I1123 08:59:44.407259  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem
	I1123 08:59:44.407287  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem (1123 bytes)
	I1123 08:59:44.407420  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem, removing ...
	I1123 08:59:44.407449  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem
	I1123 08:59:44.407501  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem (1679 bytes)
	I1123 08:59:44.407571  214550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-118762 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-118762 localhost minikube]
	I1123 08:59:44.485276  214550 provision.go:177] copyRemoteCerts
	I1123 08:59:44.485399  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:59:44.485475  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.502836  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.611676  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 08:59:44.631601  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:59:44.649182  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:59:44.666321  214550 provision.go:87] duration metric: took 278.533612ms to configureAuth
	I1123 08:59:44.666344  214550 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:59:44.666518  214550 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:44.666526  214550 machine.go:97] duration metric: took 3.896062717s to provisionDockerMachine
	I1123 08:59:44.666532  214550 client.go:176] duration metric: took 11.505696925s to LocalClient.Create
	I1123 08:59:44.666546  214550 start.go:167] duration metric: took 11.505763117s to libmachine.API.Create "default-k8s-diff-port-118762"
	I1123 08:59:44.666552  214550 start.go:293] postStartSetup for "default-k8s-diff-port-118762" (driver="docker")
	I1123 08:59:44.666561  214550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:59:44.666612  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:59:44.666651  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.683801  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.791506  214550 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:59:44.795326  214550 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:59:44.795375  214550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:59:44.795403  214550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/addons for local assets ...
	I1123 08:59:44.795479  214550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/files for local assets ...
	I1123 08:59:44.795605  214550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem -> 46242.pem in /etc/ssl/certs
	I1123 08:59:44.795716  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:59:44.804406  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:44.824224  214550 start.go:296] duration metric: took 157.657779ms for postStartSetup
	I1123 08:59:44.824627  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.842791  214550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/config.json ...
	I1123 08:59:44.845272  214550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:59:44.845334  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.870817  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.973574  214550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:59:44.978835  214550 start.go:128] duration metric: took 11.821803269s to createHost
	I1123 08:59:44.978859  214550 start.go:83] releasing machines lock for "default-k8s-diff-port-118762", held for 11.821970245s
	I1123 08:59:44.978934  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.996375  214550 ssh_runner.go:195] Run: cat /version.json
	I1123 08:59:44.996410  214550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:59:44.996429  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.997293  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:45.019323  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:45.019748  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:45.266005  214550 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:45.276798  214550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:59:45.286312  214550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:59:45.286509  214550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:59:45.400996  214550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:59:45.401066  214550 start.go:496] detecting cgroup driver to use...
	I1123 08:59:45.401106  214550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:59:45.401166  214550 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:59:45.416740  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:59:45.430174  214550 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:59:45.430277  214550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:59:45.449266  214550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:59:45.468575  214550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:59:45.593366  214550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:59:45.727407  214550 docker.go:234] disabling docker service ...
	I1123 08:59:45.727524  214550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:59:45.750566  214550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:59:45.763685  214550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:59:45.882473  214550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:59:46.015128  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:59:46.029863  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:59:46.051000  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:59:46.067292  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:59:46.081288  214550 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:59:46.081404  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:59:46.100139  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:46.120619  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:59:46.133469  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:46.142574  214550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:59:46.152921  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:59:46.164064  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:59:46.173191  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:59:46.188341  214550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:59:46.201637  214550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:59:46.214012  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:46.386854  214550 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:59:46.574017  214550 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:59:46.574082  214550 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:59:46.590863  214550 start.go:564] Will wait 60s for crictl version
	I1123 08:59:46.590924  214550 ssh_runner.go:195] Run: which crictl
	I1123 08:59:46.596219  214550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:59:46.641889  214550 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:59:46.641953  214550 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:46.715861  214550 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:46.799546  214550 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:59:46.802513  214550 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-118762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:46.830038  214550 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:59:46.834203  214550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
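The /etc/hosts update above follows a rebuild-then-copy idiom: strip any existing host.minikube.internal line, append the fresh entry, and install the result with a single sudo cp so only the final write needs root. A hedged restatement (the temp-file name is hypothetical):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.85.1\thost.minikube.internal\n'
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts
    grep host.minikube.internal /etc/hosts        # confirm the single refreshed entry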
	I1123 08:59:46.850678  214550 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-118762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:59:46.850809  214550 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:46.850885  214550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:46.899220  214550 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:46.899242  214550 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:59:46.899304  214550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:46.940637  214550 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:46.940658  214550 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:59:46.940666  214550 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1123 08:59:46.940760  214550 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-118762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
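The kubelet unit rendered above is written shortly afterwards to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 332-byte scp at 08:59:47.037). If you need to confirm what the node's kubelet actually runs with, standard systemd/procps commands (not minikube output) such as:

    systemctl cat kubelet                                        # unit file plus the 10-kubeadm.conf drop-in
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    ps -o args= -C kubelet                                       # flags of the running kubelet, if it is up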
	I1123 08:59:46.941123  214550 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:59:47.001942  214550 cni.go:84] Creating CNI manager for ""
	I1123 08:59:47.001962  214550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:47.001977  214550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:59:47.002000  214550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-118762 NodeName:default-k8s-diff-port-118762 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/c
erts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:59:47.002115  214550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-118762"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
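The kubeadm config printed above is what later lands in /var/tmp/minikube/kubeadm.yaml.new (2241 bytes, scp at 08:59:47.079). As a hedged aside, a file of this shape can be sanity-checked before init; recent kubeadm releases ship a validate subcommand, and --dry-run exercises the full flow without modifying the node:

    sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run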
	I1123 08:59:47.002179  214550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:59:47.020644  214550 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:59:47.020704  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:47.037002  214550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1123 08:59:47.055802  214550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:47.079429  214550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1123 08:59:47.092521  214550 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:47.096917  214550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:47.106392  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:47.305463  214550 ssh_runner.go:195] Run: sudo systemctl start kubelet
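Note that kubelet is started here before kubeadm init has produced /etc/kubernetes/kubelet.conf, so it typically restarts in a loop until the init step below completes. A couple of standard commands (not minikube output) to watch that window:

    systemctl status kubelet --no-pager
    journalctl -u kubelet -n 20 --no-pager     # usually "failed to load kubelet config"-style errors until kubeadm writes its files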
	I1123 08:59:47.337722  214550 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762 for IP: 192.168.85.2
	I1123 08:59:47.337739  214550 certs.go:195] generating shared ca certs ...
	I1123 08:59:47.337754  214550 certs.go:227] acquiring lock for ca certs: {Name:mk62ed57b444cc29d692b7c3030f7d32bd07c4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.337885  214550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key
	I1123 08:59:47.337928  214550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key
	I1123 08:59:47.337936  214550 certs.go:257] generating profile certs ...
	I1123 08:59:47.337988  214550 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key
	I1123 08:59:47.337997  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt with IP's: []
	I1123 08:59:47.952908  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt ...
	I1123 08:59:47.952991  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt: {Name:mkf95cd7f0813a939fc5a10b868018298b21adb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.953216  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key ...
	I1123 08:59:47.953254  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key: {Name:mkf9a2acc2c42bd0a0cf1a1f2787b6cd46ba4f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.953415  214550 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca
	I1123 08:59:47.953453  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:59:48.203697  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca ...
	I1123 08:59:48.203769  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca: {Name:mk05909547f3239afc9409b846b3fb486118a441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.203987  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca ...
	I1123 08:59:48.204023  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca: {Name:mkec035b62be2e775b2f0c85ff409f77aebf0a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.204156  214550 certs.go:382] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt
	I1123 08:59:48.204271  214550 certs.go:386] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key
	I1123 08:59:48.204380  214550 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key
	I1123 08:59:48.204418  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt with IP's: []
	I1123 08:59:48.359177  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt ...
	I1123 08:59:48.359211  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt: {Name:mkf91279fb6f4fe072e258fdea87868d2840f420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.359412  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key ...
	I1123 08:59:48.359429  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key: {Name:mkbf74023435808035706f9a2ad6638168a8a889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
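The apiserver profile cert generated above is signed for the IPs listed at 08:59:47.953 (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2). A hedged openssl check of those SANs against the file written under the profile directory:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt \
      | grep -A1 'Subject Alternative Name'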
	I1123 08:59:48.359663  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem (1338 bytes)
	W1123 08:59:48.359708  214550 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:48.359723  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:48.359753  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:59:48.359783  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:48.359810  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem (1679 bytes)
	I1123 08:59:48.359858  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:48.360416  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:48.379912  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:59:48.398946  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:48.417150  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:59:48.434559  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 08:59:48.452066  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:59:48.470350  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:48.488326  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:59:48.506336  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem --> /usr/share/ca-certificates/4624.pem (1338 bytes)
	I1123 08:59:48.524422  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /usr/share/ca-certificates/46242.pem (1708 bytes)
	I1123 08:59:48.541642  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:48.559509  214550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:48.572933  214550 ssh_runner.go:195] Run: openssl version
	I1123 08:59:48.579412  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46242.pem && ln -fs /usr/share/ca-certificates/46242.pem /etc/ssl/certs/46242.pem"
	I1123 08:59:48.588035  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.591879  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:18 /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.591946  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.633205  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46242.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:48.641796  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:48.650209  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.654132  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.654249  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.695982  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:59:48.704319  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4624.pem && ln -fs /usr/share/ca-certificates/4624.pem /etc/ssl/certs/4624.pem"
	I1123 08:59:48.712849  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.716712  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:18 /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.716781  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.757938  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4624.pem /etc/ssl/certs/51391683.0"
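The openssl x509 -hash calls above explain the symlink names that follow them: the subject-name hash is what OpenSSL looks up under /etc/ssl/certs, so minikubeCA.pem (hash b5213941) gets the b5213941.0 link created at 08:59:48.695, and 4624.pem gets 51391683.0. A quick hedged verification:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                            # symlink -> minikubeCA.pem
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # should report OK once the link exists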
	I1123 08:59:48.766377  214550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:48.769975  214550 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:48.770030  214550 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-118762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:48.770114  214550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:48.770174  214550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:48.795754  214550 cri.go:89] found id: ""
	I1123 08:59:48.795881  214550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:48.803757  214550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:48.811647  214550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:48.811743  214550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:48.819712  214550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:48.819733  214550 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:48.819805  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 08:59:48.827458  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:48.827560  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:48.835278  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 08:59:48.843241  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:48.843395  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:48.850790  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 08:59:48.859021  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:48.859145  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:48.866723  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 08:59:48.874202  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:48.874315  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
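Lines from 08:59:48.819 onward are a stale-config sweep: each kubeconfig under /etc/kubernetes is grepped for the expected https://control-plane.minikube.internal:8444 endpoint and removed when the check fails, so kubeadm can regenerate it. A compact, hedged restatement of the same loop:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done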
	I1123 08:59:48.882081  214550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:48.932250  214550 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:48.932626  214550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:48.968464  214550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:48.968571  214550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:48.968634  214550 kubeadm.go:319] OS: Linux
	I1123 08:59:48.968710  214550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:48.968779  214550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:48.968852  214550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:48.968949  214550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:48.969029  214550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:48.969104  214550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:48.969191  214550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:48.969263  214550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:48.969334  214550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:49.039395  214550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:49.039547  214550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:49.039694  214550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:49.045139  214550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:46.061340  216074 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-672503:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.463759827s)
	I1123 08:59:46.061369  216074 kic.go:203] duration metric: took 4.463899193s to extract preloaded images to volume ...
	W1123 08:59:46.061515  216074 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:59:46.061700  216074 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:59:46.159063  216074 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-672503 --name embed-certs-672503 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-672503 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-672503 --network embed-certs-672503 --ip 192.168.76.2 --volume embed-certs-672503:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:59:46.530738  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Running}}
	I1123 08:59:46.558782  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:46.582800  216074 cli_runner.go:164] Run: docker exec embed-certs-672503 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:59:46.646806  216074 oci.go:144] the created container "embed-certs-672503" has a running status.
	I1123 08:59:46.646847  216074 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa...
	I1123 08:59:46.847783  216074 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:59:46.880288  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:46.917106  216074 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:59:46.917131  216074 kic_runner.go:114] Args: [docker exec --privileged embed-certs-672503 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:59:46.987070  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:47.019780  216074 machine.go:94] provisionDockerMachine start ...
	I1123 08:59:47.019874  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:47.051570  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:47.051918  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:47.051935  216074 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:59:47.052575  216074 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:59:50.211545  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-672503
	
	I1123 08:59:50.211595  216074 ubuntu.go:182] provisioning hostname "embed-certs-672503"
	I1123 08:59:50.211673  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:50.237002  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:50.237319  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:50.237337  216074 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-672503 && echo "embed-certs-672503" | sudo tee /etc/hostname
	I1123 08:59:50.436539  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-672503
	
	I1123 08:59:50.436687  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:50.465709  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:50.466029  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:50.466045  216074 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672503' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672503/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672503' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:59:49.051452  214550 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:49.051585  214550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:49.051703  214550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:50.049674  214550 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:50.094855  214550 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:50.781521  214550 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:51.007002  214550 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:51.586516  214550 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:51.587407  214550 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-118762 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:52.294730  214550 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:52.295126  214550 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-118762 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:50.619868  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:59:50.619905  216074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-2811/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-2811/.minikube}
	I1123 08:59:50.619926  216074 ubuntu.go:190] setting up certificates
	I1123 08:59:50.619937  216074 provision.go:84] configureAuth start
	I1123 08:59:50.620004  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:50.645393  216074 provision.go:143] copyHostCerts
	I1123 08:59:50.645466  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem, removing ...
	I1123 08:59:50.645475  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem
	I1123 08:59:50.645553  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem (1082 bytes)
	I1123 08:59:50.645639  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem, removing ...
	I1123 08:59:50.645644  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem
	I1123 08:59:50.645669  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem (1123 bytes)
	I1123 08:59:50.645724  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem, removing ...
	I1123 08:59:50.645729  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem
	I1123 08:59:50.645751  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem (1679 bytes)
	I1123 08:59:50.645795  216074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672503 san=[127.0.0.1 192.168.76.2 embed-certs-672503 localhost minikube]
	I1123 08:59:51.127888  216074 provision.go:177] copyRemoteCerts
	I1123 08:59:51.127960  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:59:51.128004  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.153368  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.284623  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:59:51.314621  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:59:51.335720  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:59:51.355451  216074 provision.go:87] duration metric: took 735.481705ms to configureAuth
	I1123 08:59:51.355533  216074 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:59:51.355763  216074 config.go:182] Loaded profile config "embed-certs-672503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:51.355791  216074 machine.go:97] duration metric: took 4.335986452s to provisionDockerMachine
	I1123 08:59:51.355815  216074 client.go:176] duration metric: took 10.471938723s to LocalClient.Create
	I1123 08:59:51.355856  216074 start.go:167] duration metric: took 10.472037333s to libmachine.API.Create "embed-certs-672503"
	I1123 08:59:51.355949  216074 start.go:293] postStartSetup for "embed-certs-672503" (driver="docker")
	I1123 08:59:51.355976  216074 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:59:51.356061  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:59:51.356134  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.375632  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.492356  216074 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:59:51.496551  216074 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:59:51.496580  216074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:59:51.496592  216074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/addons for local assets ...
	I1123 08:59:51.496645  216074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/files for local assets ...
	I1123 08:59:51.496721  216074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem -> 46242.pem in /etc/ssl/certs
	I1123 08:59:51.496826  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:59:51.505195  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:51.525735  216074 start.go:296] duration metric: took 169.754775ms for postStartSetup
	I1123 08:59:51.526206  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:51.546243  216074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json ...
	I1123 08:59:51.546511  216074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:59:51.546553  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.568894  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.680931  216074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:59:51.686143  216074 start.go:128] duration metric: took 10.806110424s to createHost
	I1123 08:59:51.686171  216074 start.go:83] releasing machines lock for "embed-certs-672503", held for 10.806242996s
	I1123 08:59:51.686257  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:51.705486  216074 ssh_runner.go:195] Run: cat /version.json
	I1123 08:59:51.705573  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.705949  216074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:59:51.706024  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.760593  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.767588  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.883448  216074 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:51.991493  216074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:59:51.996626  216074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:59:51.996703  216074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:59:52.044663  216074 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
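Before recommending kindnet, the find at 08:59:51.996 sidelines any pre-existing bridge/podman CNI configs by renaming them with a .mk_disabled suffix so they no longer match *.conflist. A simplified, hedged equivalent of that rename:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) ! -name '*.mk_disabled' \) \
      -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
    ls /etc/cni/net.d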
	I1123 08:59:52.044689  216074 start.go:496] detecting cgroup driver to use...
	I1123 08:59:52.044721  216074 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:59:52.044781  216074 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:59:52.061494  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:59:52.076189  216074 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:59:52.076260  216074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:59:52.094291  216074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:59:52.114994  216074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:59:52.292895  216074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:59:52.481817  216074 docker.go:234] disabling docker service ...
	I1123 08:59:52.481931  216074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:59:52.508317  216074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:59:52.526364  216074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:59:52.700213  216074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:59:52.897094  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
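The sequence above stops, disables, and masks docker (and cri-docker just before it) so containerd remains the only runtime the kubelet can reach. A hedged restatement of those steps with a final check, mirroring the flags the log itself uses:

    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    systemctl is-active docker || true     # "inactive" (non-zero exit) once stopped and masked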
	I1123 08:59:52.915331  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:59:52.931211  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:59:52.946225  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:59:52.956101  216074 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:59:52.956226  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:59:52.965762  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:52.975341  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:59:52.985192  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:52.994955  216074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:59:53.010410  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:59:53.027207  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:59:53.042077  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:59:53.054424  216074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:59:53.063874  216074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:59:53.072557  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:53.226737  216074 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:59:53.443692  216074 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:59:53.443892  216074 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:59:53.448833  216074 start.go:564] Will wait 60s for crictl version
	I1123 08:59:53.448947  216074 ssh_runner.go:195] Run: which crictl
	I1123 08:59:53.453157  216074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:59:53.486128  216074 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:59:53.486258  216074 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:53.513131  216074 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:53.540090  216074 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:59:53.543140  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:53.564398  216074 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:59:53.569921  216074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:53.584791  216074 kubeadm.go:884] updating cluster {Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:59:53.584953  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:53.585060  216074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:53.625666  216074 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:53.625695  216074 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:59:53.625759  216074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:53.653757  216074 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:53.653781  216074 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:59:53.653789  216074 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 08:59:53.653881  216074 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-672503 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:59:53.653948  216074 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:59:53.696072  216074 cni.go:84] Creating CNI manager for ""
	I1123 08:59:53.696098  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:53.696113  216074 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:59:53.696140  216074 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672503 NodeName:embed-certs-672503 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:59:53.696260  216074 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-672503"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:59:53.696337  216074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:59:53.705716  216074 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:59:53.705795  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:53.718287  216074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1123 08:59:53.737046  216074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:53.760149  216074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
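The kubeadm config dumped above is what this scp writes to /var/tmp/minikube/kubeadm.yaml.new (it is later copied to kubeadm.yaml before init). A minimal way to spot-check the file the node actually received, assuming this run's profile name; illustrative only:

    # hypothetical spot-check of the rendered config on the node
    $ minikube ssh -p embed-certs-672503 -- sudo cat /var/tmp/minikube/kubeadm.yaml.new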
	I1123 08:59:53.778487  216074 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:53.782565  216074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
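The host entry written by the bash one-liner above can be verified the same way the preceding grep does, but from outside the node; a sketch using minikube ssh with this run's profile:

    $ minikube ssh -p embed-certs-672503 -- grep control-plane.minikube.internal /etc/hosts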
	I1123 08:59:53.792649  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:53.947067  216074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:53.969434  216074 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503 for IP: 192.168.76.2
	I1123 08:59:53.969452  216074 certs.go:195] generating shared ca certs ...
	I1123 08:59:53.969468  216074 certs.go:227] acquiring lock for ca certs: {Name:mk62ed57b444cc29d692b7c3030f7d32bd07c4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:53.969604  216074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key
	I1123 08:59:53.969644  216074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key
	I1123 08:59:53.969650  216074 certs.go:257] generating profile certs ...
	I1123 08:59:53.969704  216074 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key
	I1123 08:59:53.969718  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt with IP's: []
	I1123 08:59:54.209900  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt ...
	I1123 08:59:54.209965  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt: {Name:mk5c525ca71ddd2fe2c7f6b3ca8599f23905a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.210184  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key ...
	I1123 08:59:54.210197  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key: {Name:mk8943be44317db4dff6c1e7eaf6a19a57aa6c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.210284  216074 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae
	I1123 08:59:54.210296  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:59:54.801069  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae ...
	I1123 08:59:54.801096  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae: {Name:mk380799870e5ea7b7c67a4d865af58b1de5aef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.801278  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae ...
	I1123 08:59:54.801290  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae: {Name:mk102df1c6315a508518783bccf3cb2f81c38779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.801364  216074 certs.go:382] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt
	I1123 08:59:54.801439  216074 certs.go:386] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key
	I1123 08:59:54.801491  216074 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key
	I1123 08:59:54.801507  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt with IP's: []
	I1123 08:59:55.253694  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt ...
	I1123 08:59:55.253767  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt: {Name:mkdf06b6c921783e84858386a11a6aa335d63967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:55.253999  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key ...
	I1123 08:59:55.254013  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key: {Name:mk979f2bcf5527fe8ab1fb441ce8c10881831a69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
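The SANs baked into the generated apiserver certificate (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2 per the log above) can be confirmed with openssl against the path shown; the grep pattern assumes openssl's usual text layout:

    $ openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt \
        | grep -A1 'Subject Alternative Name'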
	I1123 08:59:55.254199  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem (1338 bytes)
	W1123 08:59:55.254240  216074 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:55.254249  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:55.254277  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:59:55.254303  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:55.254368  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem (1679 bytes)
	I1123 08:59:55.254413  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:55.255001  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:55.275757  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:59:55.301850  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:55.327043  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:59:55.356120  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:59:55.379337  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:59:55.403251  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:55.432903  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:59:55.452955  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:55.477346  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem --> /usr/share/ca-certificates/4624.pem (1338 bytes)
	I1123 08:59:55.510351  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /usr/share/ca-certificates/46242.pem (1708 bytes)
	I1123 08:59:55.531366  216074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:55.546185  216074 ssh_runner.go:195] Run: openssl version
	I1123 08:59:55.552895  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4624.pem && ln -fs /usr/share/ca-certificates/4624.pem /etc/ssl/certs/4624.pem"
	I1123 08:59:55.562322  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.566546  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:18 /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.566661  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.608819  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4624.pem /etc/ssl/certs/51391683.0"
	I1123 08:59:55.617792  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46242.pem && ln -fs /usr/share/ca-certificates/46242.pem /etc/ssl/certs/46242.pem"
	I1123 08:59:55.626621  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.631031  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:18 /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.631147  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.673213  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46242.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:55.682467  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:55.691629  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.696005  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.696116  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.737391  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
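The symlink names created above come from openssl's subject hash: each /etc/ssl/certs/<hash>.0 link points at the matching PEM so OpenSSL's default verify path can find it. The mapping for the minikube CA can be reproduced directly (path taken from this log):

    # prints b5213941, the link name used for minikubeCA.pem above
    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem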
	I1123 08:59:55.746485  216074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:55.750669  216074 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:55.750779  216074 kubeadm.go:401] StartCluster: {Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:55.750882  216074 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:55.750971  216074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:55.781886  216074 cri.go:89] found id: ""
	I1123 08:59:55.782008  216074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:55.792128  216074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:55.801015  216074 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:55.801120  216074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:55.811498  216074 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:55.811567  216074 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:55.811651  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:59:55.820390  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:55.820489  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:55.828204  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:59:55.837261  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:55.837355  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:55.845286  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:59:55.854064  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:55.854174  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:55.861833  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:59:55.870496  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:55.870610  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:59:55.878638  216074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:55.935971  216074 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:55.937587  216074 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:56.004559  216074 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:56.004761  216074 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:56.004834  216074 kubeadm.go:319] OS: Linux
	I1123 08:59:56.004912  216074 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:56.004998  216074 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:56.005083  216074 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:56.005163  216074 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:56.005244  216074 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:56.005326  216074 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:56.005405  216074 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:56.005488  216074 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:56.005568  216074 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:56.119904  216074 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:56.120070  216074 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:56.120207  216074 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
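As the preflight hint above says, the image pull can be done ahead of time; the image list for this release can also be printed without pulling. Illustrative invocations on the node, using the kubeadm binary and config paths this log already shows:

    $ /var/lib/minikube/binaries/v1.34.1/kubeadm config images list --kubernetes-version v1.34.1
    $ /var/lib/minikube/binaries/v1.34.1/kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml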
	I1123 08:59:56.130630  216074 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:54.179851  214550 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:59:55.466764  214550 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:55.672141  214550 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:59:55.672731  214550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:59:55.836881  214550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:59:56.018357  214550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:59:56.361926  214550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:59:56.873997  214550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:59:57.413691  214550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:59:57.414774  214550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:59:57.417706  214550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:59:57.421342  214550 out.go:252]   - Booting up control plane ...
	I1123 08:59:57.421437  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:59:57.426176  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:59:57.426253  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:59:57.445605  214550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:59:57.445714  214550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:59:57.456012  214550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:59:57.456111  214550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:59:57.456152  214550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:59:57.617060  214550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:59:57.617179  214550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:59:56.136350  216074 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:56.136541  216074 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:56.136667  216074 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:57.121922  216074 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:57.436901  216074 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:57.609063  216074 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:58.013484  216074 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:58.298959  216074 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:58.303729  216074 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-672503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:58.349481  216074 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:58.350030  216074 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-672503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:59.325836  216074 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:00:00.299809  216074 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:59.119693  214550 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500938234s
	I1123 08:59:59.122603  214550 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:59:59.122949  214550 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1123 08:59:59.123601  214550 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:59:59.124077  214550 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:00:00.879718  216074 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:00:00.879799  216074 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:00:01.122151  216074 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:00:03.397018  216074 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:00:05.387724  216074 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:00:05.691737  216074 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:00:06.099799  216074 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:00:06.099904  216074 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:00:06.107751  216074 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:00:03.716327  214550 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.591863015s
	I1123 09:00:09.442146  214550 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.317417042s
	I1123 09:00:09.630647  214550 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.507233792s
	I1123 09:00:09.661041  214550 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:00:09.696775  214550 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:00:09.724658  214550 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:00:09.725105  214550 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-118762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:00:09.789313  214550 kubeadm.go:319] [bootstrap-token] Using token: d97ou5.m8drvm11cz5qqhuf
	I1123 09:00:06.111147  216074 out.go:252]   - Booting up control plane ...
	I1123 09:00:06.111260  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:00:06.111338  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:00:06.111425  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:00:06.141906  216074 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:00:06.142016  216074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:00:06.152623  216074 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:00:06.152727  216074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:00:06.152767  216074 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:00:06.424623  216074 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:00:06.424743  216074 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:00:07.419394  216074 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001849125s
	I1123 09:00:07.422769  216074 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:00:07.422861  216074 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 09:00:07.423174  216074 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:00:07.423260  216074 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:00:09.792446  214550 out.go:252]   - Configuring RBAC rules ...
	I1123 09:00:09.792565  214550 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:00:09.822919  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:00:09.841947  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:00:09.852584  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:00:09.860084  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:00:09.867079  214550 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:00:10.041393  214550 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:00:10.492226  214550 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:00:11.049466  214550 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:00:11.050970  214550 kubeadm.go:319] 
	I1123 09:00:11.051044  214550 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:00:11.051049  214550 kubeadm.go:319] 
	I1123 09:00:11.051126  214550 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:00:11.051130  214550 kubeadm.go:319] 
	I1123 09:00:11.051155  214550 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:00:11.054107  214550 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:00:11.054173  214550 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:00:11.054178  214550 kubeadm.go:319] 
	I1123 09:00:11.054232  214550 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:00:11.054259  214550 kubeadm.go:319] 
	I1123 09:00:11.054308  214550 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:00:11.054312  214550 kubeadm.go:319] 
	I1123 09:00:11.054364  214550 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:00:11.054439  214550 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:00:11.054508  214550 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:00:11.054514  214550 kubeadm.go:319] 
	I1123 09:00:11.054918  214550 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:00:11.054999  214550 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:00:11.055003  214550 kubeadm.go:319] 
	I1123 09:00:11.055310  214550 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token d97ou5.m8drvm11cz5qqhuf \
	I1123 09:00:11.055433  214550 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c \
	I1123 09:00:11.055653  214550 kubeadm.go:319] 	--control-plane 
	I1123 09:00:11.055662  214550 kubeadm.go:319] 
	I1123 09:00:11.056081  214550 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:00:11.056091  214550 kubeadm.go:319] 
	I1123 09:00:11.056374  214550 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token d97ou5.m8drvm11cz5qqhuf \
	I1123 09:00:11.056668  214550 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c 
	I1123 09:00:11.065038  214550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:00:11.065464  214550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:00:11.065590  214550 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
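The --discovery-token-ca-cert-hash in the join command above can be re-derived from the cluster CA; a sketch following the standard kubeadm docs recipe, using the CA path this log copies onto the node (assumes the CA key is RSA, as minikube generates; the output should match the sha256 value printed above):

    $ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
        | openssl rsa -pubin -outform der 2>/dev/null \
        | openssl dgst -sha256 -hex | sed 's/^.* //'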
	I1123 09:00:11.065601  214550 cni.go:84] Creating CNI manager for ""
	I1123 09:00:11.065609  214550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:00:11.068935  214550 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:00:11.071817  214550 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:00:11.083987  214550 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:00:11.084065  214550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:00:11.157462  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
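Once the CNI manifest is applied, the kindnet rollout can be watched from the host; a sketch, assuming the DaemonSet keeps its usual kindnet name in kube-system and that the kubeconfig context matches the profile name:

    $ kubectl --context default-k8s-diff-port-118762 -n kube-system rollout status ds/kindnet --timeout=2m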
	I1123 09:00:11.877723  214550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:00:11.877851  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:11.877919  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-118762 minikube.k8s.io/updated_at=2025_11_23T09_00_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=default-k8s-diff-port-118762 minikube.k8s.io/primary=true
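Whether the label pass above took effect can be checked against the same context (the context name equals the profile name in minikube); illustrative only:

    $ kubectl --context default-k8s-diff-port-118762 get node default-k8s-diff-port-118762 --show-labels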
	I1123 09:00:12.400645  214550 ops.go:34] apiserver oom_adj: -16
	I1123 09:00:12.400749  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.479703  216074 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.056359214s
	I1123 09:00:12.901058  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:13.400921  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:13.901348  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.400890  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.901622  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:15.401708  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:15.797055  214550 kubeadm.go:1114] duration metric: took 3.919248598s to wait for elevateKubeSystemPrivileges
	I1123 09:00:15.797081  214550 kubeadm.go:403] duration metric: took 27.027055323s to StartCluster
	I1123 09:00:15.797098  214550 settings.go:142] acquiring lock: {Name:mkd0156f6f98ed352de83fb5c4c92474ddea9220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:15.797159  214550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 09:00:15.797780  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/kubeconfig: {Name:mk75cb4a9442799c344ac747e18ea4edd6e23c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:15.797984  214550 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:00:15.798066  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:00:15.798303  214550 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:00:15.798340  214550 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:00:15.798395  214550 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-118762"
	I1123 09:00:15.798414  214550 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-118762"
	I1123 09:00:15.798437  214550 host.go:66] Checking if "default-k8s-diff-port-118762" exists ...
	I1123 09:00:15.798912  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.799494  214550 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-118762"
	I1123 09:00:15.799518  214550 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-118762"
	I1123 09:00:15.799812  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.802617  214550 out.go:179] * Verifying Kubernetes components...
	I1123 09:00:15.805826  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:15.840681  214550 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-118762"
	I1123 09:00:15.840730  214550 host.go:66] Checking if "default-k8s-diff-port-118762" exists ...
	I1123 09:00:15.841178  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.841365  214550 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:00:15.845719  214550 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:15.845739  214550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:00:15.845799  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 09:00:15.885107  214550 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:15.885129  214550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:00:15.885196  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 09:00:15.885424  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 09:00:15.922980  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 09:00:16.516094  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:00:16.516301  214550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:16.565568  214550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:16.660294  214550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:17.770086  214550 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.253733356s)
	I1123 09:00:17.770803  214550 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-118762" to be "Ready" ...
	I1123 09:00:17.771113  214550 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.254946263s)
	I1123 09:00:17.771140  214550 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 09:00:18.288784  214550 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-118762" context rescaled to 1 replicas
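The host.minikube.internal record injected by the sed pipeline above can be confirmed by dumping the rewritten Corefile; illustrative only:

    $ kubectl --context default-k8s-diff-port-118762 -n kube-system \
        get configmap coredns -o jsonpath='{.data.Corefile}'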
	I1123 09:00:18.294378  214550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.634044217s)
	I1123 09:00:18.294508  214550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.728864491s)
	I1123 09:00:18.313019  214550 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
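A quick way to confirm both enabled addons landed; the storage-provisioner pod name below is minikube's usual one and is assumed here:

    $ kubectl --context default-k8s-diff-port-118762 get storageclass
    $ kubectl --context default-k8s-diff-port-118762 -n kube-system get pod storage-provisioner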
	I1123 09:00:18.174934  216074 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.752142419s
	I1123 09:00:18.924553  216074 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.501560337s
	I1123 09:00:18.944911  216074 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:00:18.969340  216074 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:00:18.982694  216074 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:00:18.982935  216074 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-672503 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:00:18.996135  216074 kubeadm.go:319] [bootstrap-token] Using token: n9250s.xdwmypsz1r225um6
	I1123 09:00:18.999202  216074 out.go:252]   - Configuring RBAC rules ...
	I1123 09:00:18.999323  216074 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:00:19.010682  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:00:19.023889  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:00:19.027010  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:00:19.034948  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:00:19.039786  216074 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:00:19.331973  216074 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:00:19.770619  216074 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:00:20.331084  216074 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:00:20.332385  216074 kubeadm.go:319] 
	I1123 09:00:20.332460  216074 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:00:20.332472  216074 kubeadm.go:319] 
	I1123 09:00:20.332550  216074 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:00:20.332554  216074 kubeadm.go:319] 
	I1123 09:00:20.332585  216074 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:00:20.332649  216074 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:00:20.332706  216074 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:00:20.332714  216074 kubeadm.go:319] 
	I1123 09:00:20.332768  216074 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:00:20.332775  216074 kubeadm.go:319] 
	I1123 09:00:20.332826  216074 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:00:20.332834  216074 kubeadm.go:319] 
	I1123 09:00:20.332886  216074 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:00:20.332964  216074 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:00:20.333036  216074 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:00:20.333044  216074 kubeadm.go:319] 
	I1123 09:00:20.333141  216074 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:00:20.333222  216074 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:00:20.333230  216074 kubeadm.go:319] 
	I1123 09:00:20.333314  216074 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token n9250s.xdwmypsz1r225um6 \
	I1123 09:00:20.333421  216074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c \
	I1123 09:00:20.333454  216074 kubeadm.go:319] 	--control-plane 
	I1123 09:00:20.333461  216074 kubeadm.go:319] 
	I1123 09:00:20.333554  216074 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:00:20.333574  216074 kubeadm.go:319] 
	I1123 09:00:20.333657  216074 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n9250s.xdwmypsz1r225um6 \
	I1123 09:00:20.333764  216074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c 
	I1123 09:00:20.339187  216074 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:00:20.339460  216074 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:00:20.339572  216074 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:00:20.339594  216074 cni.go:84] Creating CNI manager for ""
	I1123 09:00:20.339606  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:00:20.342914  216074 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:00:20.345744  216074 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:00:20.350352  216074 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:00:20.350371  216074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:00:20.365062  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:00:18.315850  214550 addons.go:530] duration metric: took 2.517504837s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1123 09:00:19.773873  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:21.774051  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
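The retry loop above is waiting on the node's Ready condition; the same wait can be expressed with kubectl's own wait primitive, shown here as a sketch against this run's context:

    $ kubectl --context default-k8s-diff-port-118762 wait --for=condition=Ready \
        node/default-k8s-diff-port-118762 --timeout=6m0s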
	I1123 09:00:20.682862  216074 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:00:20.683008  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:20.683107  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-672503 minikube.k8s.io/updated_at=2025_11_23T09_00_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=embed-certs-672503 minikube.k8s.io/primary=true
	I1123 09:00:20.861424  216074 ops.go:34] apiserver oom_adj: -16
	I1123 09:00:20.881440  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:21.382484  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:21.881564  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:22.381797  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:22.881698  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:23.382044  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:23.881478  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:24.381553  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:24.882135  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:25.085445  216074 kubeadm.go:1114] duration metric: took 4.402483472s to wait for elevateKubeSystemPrivileges
	I1123 09:00:25.085479  216074 kubeadm.go:403] duration metric: took 29.334704925s to StartCluster
	I1123 09:00:25.085499  216074 settings.go:142] acquiring lock: {Name:mkd0156f6f98ed352de83fb5c4c92474ddea9220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:25.085586  216074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 09:00:25.087626  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/kubeconfig: {Name:mk75cb4a9442799c344ac747e18ea4edd6e23c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:25.087936  216074 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:00:25.088691  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:00:25.089017  216074 config.go:182] Loaded profile config "embed-certs-672503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:00:25.089061  216074 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:00:25.089133  216074 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-672503"
	I1123 09:00:25.089153  216074 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-672503"
	I1123 09:00:25.089179  216074 host.go:66] Checking if "embed-certs-672503" exists ...
	I1123 09:00:25.089653  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.090352  216074 addons.go:70] Setting default-storageclass=true in profile "embed-certs-672503"
	I1123 09:00:25.090381  216074 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672503"
	I1123 09:00:25.090715  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.093412  216074 out.go:179] * Verifying Kubernetes components...
	I1123 09:00:25.100650  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:25.132922  216074 addons.go:239] Setting addon default-storageclass=true in "embed-certs-672503"
	I1123 09:00:25.132970  216074 host.go:66] Checking if "embed-certs-672503" exists ...
	I1123 09:00:25.133464  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.134451  216074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:00:25.137634  216074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:25.137660  216074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:00:25.137734  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 09:00:25.175531  216074 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:25.175555  216074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:00:25.175631  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 09:00:25.190357  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 09:00:25.214325  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 09:00:25.395679  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:00:25.445659  216074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:25.568912  216074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:25.606764  216074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:26.047827  216074 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 09:00:26.050542  216074 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672503" to be "Ready" ...
	I1123 09:00:26.465272  216074 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1123 09:00:23.774226  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:26.274269  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:26.468271  216074 addons.go:530] duration metric: took 1.379204566s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 09:00:26.552103  216074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-672503" context rescaled to 1 replicas
	W1123 09:00:28.054477  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:30.054656  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:28.774465  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:30.774882  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:32.553443  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:35.054660  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:33.274428  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:35.774260  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:37.554121  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:40.055622  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:38.273771  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:40.773644  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:42.553668  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:44.553840  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:43.273604  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:45.275951  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:47.773735  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:47.054612  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:49.553846  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:49.774526  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:52.273699  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:51.554200  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:54.053723  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:54.274489  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:56.773822  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:57.776587  214550 node_ready.go:49] node "default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:57.776614  214550 node_ready.go:38] duration metric: took 40.005787911s for node "default-k8s-diff-port-118762" to be "Ready" ...
	I1123 09:00:57.776628  214550 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:00:57.776688  214550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:00:57.792566  214550 api_server.go:72] duration metric: took 41.994554549s to wait for apiserver process to appear ...
	I1123 09:00:57.792589  214550 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:00:57.792608  214550 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 09:00:57.801332  214550 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 09:00:57.802591  214550 api_server.go:141] control plane version: v1.34.1
	I1123 09:00:57.802671  214550 api_server.go:131] duration metric: took 10.074405ms to wait for apiserver health ...
	I1123 09:00:57.802696  214550 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:00:57.806165  214550 system_pods.go:59] 8 kube-system pods found
	I1123 09:00:57.806249  214550 system_pods.go:61] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:57.806272  214550 system_pods.go:61] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:57.806312  214550 system_pods.go:61] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:57.806336  214550 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:57.806359  214550 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:57.806397  214550 system_pods.go:61] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:57.806420  214550 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:57.806446  214550 system_pods.go:61] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:57.806485  214550 system_pods.go:74] duration metric: took 3.749386ms to wait for pod list to return data ...
	I1123 09:00:57.806513  214550 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:00:57.809265  214550 default_sa.go:45] found service account: "default"
	I1123 09:00:57.809285  214550 default_sa.go:55] duration metric: took 2.751519ms for default service account to be created ...
	I1123 09:00:57.809298  214550 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:00:57.811926  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:57.811955  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:57.811962  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:57.811968  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:57.811972  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:57.811977  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:57.811980  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:57.811984  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:57.811991  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:57.812009  214550 retry.go:31] will retry after 274.029839ms: missing components: kube-dns
	I1123 09:00:58.095441  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:58.095474  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:58.095481  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:58.095487  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:58.095491  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:58.095497  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:58.095502  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:58.095506  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:58.095511  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:58.095526  214550 retry.go:31] will retry after 259.858354ms: missing components: kube-dns
	I1123 09:00:58.359494  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:58.359527  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Running
	I1123 09:00:58.359536  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:58.359542  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:58.359546  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:58.359551  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:58.359556  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:58.359560  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:58.359564  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Running
	I1123 09:00:58.359572  214550 system_pods.go:126] duration metric: took 550.268629ms to wait for k8s-apps to be running ...
	I1123 09:00:58.359583  214550 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:00:58.359641  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:00:58.373607  214550 system_svc.go:56] duration metric: took 14.015669ms WaitForService to wait for kubelet
	I1123 09:00:58.373638  214550 kubeadm.go:587] duration metric: took 42.575629379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:00:58.373657  214550 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:00:58.376361  214550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:00:58.376394  214550 node_conditions.go:123] node cpu capacity is 2
	I1123 09:00:58.376408  214550 node_conditions.go:105] duration metric: took 2.746055ms to run NodePressure ...
	I1123 09:00:58.376419  214550 start.go:242] waiting for startup goroutines ...
	I1123 09:00:58.376427  214550 start.go:247] waiting for cluster config update ...
	I1123 09:00:58.376438  214550 start.go:256] writing updated cluster config ...
	I1123 09:00:58.376721  214550 ssh_runner.go:195] Run: rm -f paused
	I1123 09:00:58.380292  214550 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:00:58.385153  214550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r5snd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.390028  214550 pod_ready.go:94] pod "coredns-66bc5c9577-r5snd" is "Ready"
	I1123 09:00:58.390067  214550 pod_ready.go:86] duration metric: took 4.884639ms for pod "coredns-66bc5c9577-r5snd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.392315  214550 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.396380  214550 pod_ready.go:94] pod "etcd-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.396450  214550 pod_ready.go:86] duration metric: took 4.109265ms for pod "etcd-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.398716  214550 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.403219  214550 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.403254  214550 pod_ready.go:86] duration metric: took 4.51516ms for pod "kube-apiserver-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.405723  214550 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.785140  214550 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.785167  214550 pod_ready.go:86] duration metric: took 379.369705ms for pod "kube-controller-manager-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.985264  214550 pod_ready.go:83] waiting for pod "kube-proxy-fwc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.387683  214550 pod_ready.go:94] pod "kube-proxy-fwc9v" is "Ready"
	I1123 09:00:59.387712  214550 pod_ready.go:86] duration metric: took 402.417123ms for pod "kube-proxy-fwc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.588360  214550 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.985884  214550 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:59.985910  214550 pod_ready.go:86] duration metric: took 397.484705ms for pod "kube-scheduler-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.985924  214550 pod_ready.go:40] duration metric: took 1.605599928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:00.360876  214550 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:01:00.365235  214550 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-118762" cluster and "default" namespace by default
	W1123 09:00:56.054171  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:58.059777  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:00.201612  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:02.554079  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:05.054145  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	I1123 09:01:06.553619  216074 node_ready.go:49] node "embed-certs-672503" is "Ready"
	I1123 09:01:06.553653  216074 node_ready.go:38] duration metric: took 40.503031578s for node "embed-certs-672503" to be "Ready" ...
	I1123 09:01:06.553667  216074 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:01:06.553728  216074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:01:06.566313  216074 api_server.go:72] duration metric: took 41.478343311s to wait for apiserver process to appear ...
	I1123 09:01:06.566341  216074 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:01:06.566374  216074 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:01:06.574435  216074 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:01:06.575998  216074 api_server.go:141] control plane version: v1.34.1
	I1123 09:01:06.576024  216074 api_server.go:131] duration metric: took 9.676749ms to wait for apiserver health ...
	I1123 09:01:06.576034  216074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:01:06.579331  216074 system_pods.go:59] 8 kube-system pods found
	I1123 09:01:06.579491  216074 system_pods.go:61] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.579500  216074 system_pods.go:61] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.579506  216074 system_pods.go:61] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.579511  216074 system_pods.go:61] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.579516  216074 system_pods.go:61] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.579524  216074 system_pods.go:61] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.579529  216074 system_pods.go:61] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.579541  216074 system_pods.go:61] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.579548  216074 system_pods.go:74] duration metric: took 3.508309ms to wait for pod list to return data ...
	I1123 09:01:06.579562  216074 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:01:06.584140  216074 default_sa.go:45] found service account: "default"
	I1123 09:01:06.584219  216074 default_sa.go:55] duration metric: took 4.649963ms for default service account to be created ...
	I1123 09:01:06.584244  216074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:01:06.587869  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:06.587906  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.587913  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.587919  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.587923  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.587929  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.587933  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.587938  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.587945  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.587968  216074 retry.go:31] will retry after 247.424175ms: missing components: kube-dns
	I1123 09:01:06.841170  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:06.841208  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.841215  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.841222  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.841227  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.841232  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.841237  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.841241  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.841246  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.841262  216074 retry.go:31] will retry after 283.378756ms: missing components: kube-dns
	I1123 09:01:07.129581  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.129666  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.129688  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.129732  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.129759  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.129784  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.129819  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.129847  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.129870  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.129915  216074 retry.go:31] will retry after 365.111173ms: missing components: kube-dns
	I1123 09:01:07.499321  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.499446  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.499463  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.499471  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.499475  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.499500  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.499508  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.499546  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.499559  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.499580  216074 retry.go:31] will retry after 378.113017ms: missing components: kube-dns
	I1123 09:01:07.881489  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.881535  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.881542  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.881549  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.881554  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.881559  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.881562  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.881566  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.881570  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.881588  216074 retry.go:31] will retry after 690.773315ms: missing components: kube-dns
	I1123 09:01:08.576591  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:08.576623  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Running
	I1123 09:01:08.576630  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:08.576635  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:08.576657  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:08.576662  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:08.576666  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:08.576671  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:08.576676  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:08.576687  216074 system_pods.go:126] duration metric: took 1.992424101s to wait for k8s-apps to be running ...
	I1123 09:01:08.576700  216074 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:01:08.576756  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:01:08.591468  216074 system_svc.go:56] duration metric: took 14.759167ms WaitForService to wait for kubelet
	I1123 09:01:08.591497  216074 kubeadm.go:587] duration metric: took 43.503532438s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:01:08.591516  216074 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:01:08.594570  216074 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:01:08.594606  216074 node_conditions.go:123] node cpu capacity is 2
	I1123 09:01:08.594621  216074 node_conditions.go:105] duration metric: took 3.099272ms to run NodePressure ...
	I1123 09:01:08.594634  216074 start.go:242] waiting for startup goroutines ...
	I1123 09:01:08.594642  216074 start.go:247] waiting for cluster config update ...
	I1123 09:01:08.594654  216074 start.go:256] writing updated cluster config ...
	I1123 09:01:08.594942  216074 ssh_runner.go:195] Run: rm -f paused
	I1123 09:01:08.598542  216074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:08.602701  216074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nhnbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.608070  216074 pod_ready.go:94] pod "coredns-66bc5c9577-nhnbc" is "Ready"
	I1123 09:01:08.608097  216074 pod_ready.go:86] duration metric: took 5.358349ms for pod "coredns-66bc5c9577-nhnbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.610514  216074 pod_ready.go:83] waiting for pod "etcd-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.615555  216074 pod_ready.go:94] pod "etcd-embed-certs-672503" is "Ready"
	I1123 09:01:08.615582  216074 pod_ready.go:86] duration metric: took 5.042688ms for pod "etcd-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.618015  216074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.624626  216074 pod_ready.go:94] pod "kube-apiserver-embed-certs-672503" is "Ready"
	I1123 09:01:08.624654  216074 pod_ready.go:86] duration metric: took 6.607794ms for pod "kube-apiserver-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.632607  216074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.003276  216074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-672503" is "Ready"
	I1123 09:01:09.003305  216074 pod_ready.go:86] duration metric: took 370.669957ms for pod "kube-controller-manager-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.204229  216074 pod_ready.go:83] waiting for pod "kube-proxy-wbnjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.603471  216074 pod_ready.go:94] pod "kube-proxy-wbnjd" is "Ready"
	I1123 09:01:09.603500  216074 pod_ready.go:86] duration metric: took 399.242725ms for pod "kube-proxy-wbnjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.802674  216074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:10.203777  216074 pod_ready.go:94] pod "kube-scheduler-embed-certs-672503" is "Ready"
	I1123 09:01:10.203816  216074 pod_ready.go:86] duration metric: took 401.074978ms for pod "kube-scheduler-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:10.203830  216074 pod_ready.go:40] duration metric: took 1.605254448s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:10.258134  216074 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:01:10.261593  216074 out.go:179] * Done! kubectl is now configured to use "embed-certs-672503" cluster and "default" namespace by default
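A rough way to reproduce the readiness checks the start log above walks through, by hand from the host (a sketch, assuming the "embed-certs-672503" kubectl context written by this run is still present):

    kubectl --context embed-certs-672503 get nodes            # node Ready condition (node_ready.go wait)
    kubectl --context embed-certs-672503 get --raw /healthz   # apiserver health; the log probes the same endpoint on 192.168.76.2:8443
    kubectl --context embed-certs-672503 -n kube-system wait \
      --for=condition=Ready pod -l k8s-app=kube-dns --timeout=240s   # per-pod wait similar to pod_ready.go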
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a3e2432a727b8       1611cd07b61d5       8 seconds ago        Running             busybox                   0                   e9e2afc03331d       busybox                                      default
	08e6a055c156c       138784d87c9c5       14 seconds ago       Running             coredns                   0                   0efa056b5977b       coredns-66bc5c9577-nhnbc                     kube-system
	ce730c79fdfcd       ba04bb24b9575       14 seconds ago       Running             storage-provisioner       0                   16618d3617fc6       storage-provisioner                          kube-system
	a022c95c6ebf7       05baa95f5142d       55 seconds ago       Running             kube-proxy                0                   bfd4c60efc25c       kube-proxy-wbnjd                             kube-system
	e2138f60728ce       b1a8c6f707935       56 seconds ago       Running             kindnet-cni               0                   bd22ae2b49f13       kindnet-crv85                                kube-system
	48228be1d3006       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   fd7dcc8602f94       kube-scheduler-embed-certs-672503            kube-system
	b631bc0f28a0e       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   42ca9f105eec7       kube-controller-manager-embed-certs-672503   kube-system
	6935bf91c2b5a       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   ecb38543bb5c1       kube-apiserver-embed-certs-672503            kube-system
	2e1658439e000       a1894772a478e       About a minute ago   Running             etcd                      0                   6c1565868f1b0       etcd-embed-certs-672503                      kube-system
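The container listing above is CRI output collected from the node; a comparable snapshot can be taken directly on the minikube node (a sketch, assuming the profile name from this run):

    minikube ssh -p embed-certs-672503 "sudo crictl ps -a"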
	
	
	==> containerd <==
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.815212530Z" level=info msg="CreateContainer within sandbox \"16618d3617fc629dd2352928e691cbaa9fd1bc5b3bd90d3d653de341bcc6da8c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"ce730c79fdfcd3dd03d8c3332496eb53c661cab9fd6e3d375d3b44e79a551d4f\""
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.816441922Z" level=info msg="StartContainer for \"ce730c79fdfcd3dd03d8c3332496eb53c661cab9fd6e3d375d3b44e79a551d4f\""
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.818027263Z" level=info msg="connecting to shim ce730c79fdfcd3dd03d8c3332496eb53c661cab9fd6e3d375d3b44e79a551d4f" address="unix:///run/containerd/s/625114e08a76d737f0d90db6f646eacf896fbbc0972839725c46af7a526025c1" protocol=ttrpc version=3
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.827248574Z" level=info msg="Container 08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.838431894Z" level=info msg="CreateContainer within sandbox \"0efa056b5977b2dff1b7d3d96f8f33f9675eb74d4e8448a776071d2258e3b7cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a\""
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.844803306Z" level=info msg="StartContainer for \"08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a\""
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.848065353Z" level=info msg="connecting to shim 08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a" address="unix:///run/containerd/s/876e08f4bcdec2da65ded68501c79fc31841999ab5a293a8b5144b4ad6668604" protocol=ttrpc version=3
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.945320893Z" level=info msg="StartContainer for \"ce730c79fdfcd3dd03d8c3332496eb53c661cab9fd6e3d375d3b44e79a551d4f\" returns successfully"
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.961785491Z" level=info msg="StartContainer for \"08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a\" returns successfully"
	Nov 23 09:01:10 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:10.814686791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b98565e7-4d04-4d9a-b95e-186c353129dc,Namespace:default,Attempt:0,}"
	Nov 23 09:01:10 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:10.899816738Z" level=info msg="connecting to shim e9e2afc03331d6e4e3d71be190c54611a94dda353e7080b864a9b5480bc638d0" address="unix:///run/containerd/s/c5b23ad017fbe6e680412e4829b91861d00735239e2b9406dc4919aaca456cb8" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:01:10 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:10.988261154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b98565e7-4d04-4d9a-b95e-186c353129dc,Namespace:default,Attempt:0,} returns sandbox id \"e9e2afc03331d6e4e3d71be190c54611a94dda353e7080b864a9b5480bc638d0\""
	Nov 23 09:01:10 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:10.990988666Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.278729136Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.280723317Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937186"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.283476913Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.287585473Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.288400716Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.297364853s"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.288448093Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.298691131Z" level=info msg="CreateContainer within sandbox \"e9e2afc03331d6e4e3d71be190c54611a94dda353e7080b864a9b5480bc638d0\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.321270229Z" level=info msg="Container a3e2432a727b8b6416282c0432a3b637ad0a87582516bc687eff1dbbe8f6fd0d: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.331726069Z" level=info msg="CreateContainer within sandbox \"e9e2afc03331d6e4e3d71be190c54611a94dda353e7080b864a9b5480bc638d0\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"a3e2432a727b8b6416282c0432a3b637ad0a87582516bc687eff1dbbe8f6fd0d\""
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.334224771Z" level=info msg="StartContainer for \"a3e2432a727b8b6416282c0432a3b637ad0a87582516bc687eff1dbbe8f6fd0d\""
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.335767773Z" level=info msg="connecting to shim a3e2432a727b8b6416282c0432a3b637ad0a87582516bc687eff1dbbe8f6fd0d" address="unix:///run/containerd/s/c5b23ad017fbe6e680412e4829b91861d00735239e2b9406dc4919aaca456cb8" protocol=ttrpc version=3
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.435533261Z" level=info msg="StartContainer for \"a3e2432a727b8b6416282c0432a3b637ad0a87582516bc687eff1dbbe8f6fd0d\" returns successfully"
	
	
	==> coredns [08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53163 - 48004 "HINFO IN 4256419541080546424.6439688394332634916. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034440699s
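The host record CoreDNS resolves here was injected earlier in the start log via the kubectl replace pipeline, which adds a hosts block equivalent to the following (plus a log directive) to the coredns ConfigMap; it can be inspected with the kubectl command below, assuming the same context name:

    hosts {
       192.168.76.1 host.minikube.internal
       fallthrough
    }

    kubectl --context embed-certs-672503 -n kube-system get configmap coredns -o yaml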
	
	
	==> describe nodes <==
	Name:               embed-certs-672503
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-672503
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=embed-certs-672503
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_00_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:00:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-672503
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:01:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:01:21 +0000   Sun, 23 Nov 2025 09:00:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:01:21 +0000   Sun, 23 Nov 2025 09:00:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:01:21 +0000   Sun, 23 Nov 2025 09:00:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:01:21 +0000   Sun, 23 Nov 2025 09:01:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-672503
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                fc675532-bd47-4c37-8a40-91e311d7dcb4
	  Boot ID:                    86d8501c-1df5-4d7e-90cb-d9ad951202c5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-nhnbc                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     57s
	  kube-system                 etcd-embed-certs-672503                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         61s
	  kube-system                 kindnet-crv85                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      57s
	  kube-system                 kube-apiserver-embed-certs-672503             250m (12%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-controller-manager-embed-certs-672503    200m (10%)    0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 kube-proxy-wbnjd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	  kube-system                 kube-scheduler-embed-certs-672503             100m (5%)     0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 55s                kube-proxy       
	  Normal   NodeAllocatableEnforced  74s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 74s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  74s (x8 over 74s)  kubelet          Node embed-certs-672503 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    74s (x8 over 74s)  kubelet          Node embed-certs-672503 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     74s (x7 over 74s)  kubelet          Node embed-certs-672503 status is now: NodeHasSufficientPID
	  Normal   Starting                 74s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 62s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 62s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  62s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  61s                kubelet          Node embed-certs-672503 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    61s                kubelet          Node embed-certs-672503 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     61s                kubelet          Node embed-certs-672503 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           58s                node-controller  Node embed-certs-672503 event: Registered Node embed-certs-672503 in Controller
	  Normal   NodeReady                15s                kubelet          Node embed-certs-672503 status is now: NodeReady
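The node summary above is standard kubectl output and can be regenerated at any point while the cluster is up (assuming the same context):

    kubectl --context embed-certs-672503 describe node embed-certs-672503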
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014670] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505841] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033008] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.738583] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.057424] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:10] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:26] hrtimer: interrupt took 58442338 ns
	
	
	==> etcd [2e1658439e00054d4c123a0704ae2372f64da746c298df15a9d59f81c23e7dcc] <==
	{"level":"warn","ts":"2025-11-23T09:00:13.506766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.575570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.619275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.655545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.671823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.698869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.722370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.753385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.786144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.803583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.834613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.938415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.939626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.989516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.041149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.072867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.090793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.108934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.138745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.164090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.182525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.219717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.248176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.333400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.475947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59890","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:01:21 up  1:43,  0 user,  load average: 2.45, 3.41, 2.94
	Linux embed-certs-672503 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2138f60728ce59a3b0b07284a407bcd8d065696f74d0e865f40f4d1b3de6a8a] <==
	I1123 09:00:25.929202       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:00:25.929538       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:00:25.929650       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:00:25.929662       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:00:25.929676       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:00:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:00:26.220364       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:00:26.220463       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:00:26.220542       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:00:26.221165       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:00:56.221514       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 09:00:56.221529       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:00:56.221654       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 09:00:56.222831       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 09:00:57.722105       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:00:57.722183       1 metrics.go:72] Registering metrics
	I1123 09:00:57.722269       1 controller.go:711] "Syncing nftables rules"
	I1123 09:01:06.220792       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:01:06.220854       1 main.go:301] handling current node
	I1123 09:01:16.220035       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:01:16.220090       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6935bf91c2b5a9bd9a3a879a561a5fe8b7706d73efed8292fb9a15b2b1fb8bd9] <==
	I1123 09:00:16.112571       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:00:16.112892       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:00:16.114026       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:00:16.120153       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:00:16.124642       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:00:16.140135       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:16.172282       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:00:16.648341       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:00:16.688557       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:00:16.688586       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:00:18.362210       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:00:18.471094       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:00:18.640445       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:00:18.648637       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 09:00:18.649929       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:00:18.655796       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:00:18.757483       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:00:19.744392       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:00:19.769130       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:00:19.786013       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:00:24.560804       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:24.566099       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:24.724871       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:00:24.858206       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:01:20.648114       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:58014: use of closed network connection
	
	
	==> kube-controller-manager [b631bc0f28a0e89a9a6d9e7776f78ca994bfb0ae27f75a8fd29f7e8d18f46472] <==
	I1123 09:00:23.755767       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:00:23.755875       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:00:23.756107       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:00:23.757396       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:00:23.757501       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:00:23.757542       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:00:23.762191       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:23.765833       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:00:23.765993       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:23.766081       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:23.766145       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:00:23.766154       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:00:23.777111       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:00:23.789822       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:00:23.796131       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:23.801324       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:00:23.802690       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:00:23.802904       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:00:23.803049       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:00:23.803330       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:00:23.804767       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:00:23.805132       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:00:23.805297       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:00:23.812393       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:01:08.759180       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a022c95c6ebf7ec165890e7afb9f737a74e7d629a3e09999147b89095bfe6217] <==
	I1123 09:00:26.194401       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:00:26.297289       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:00:26.398138       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:00:26.398174       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:00:26.398268       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:00:26.449287       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:00:26.449344       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:00:26.460756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:00:26.461111       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:00:26.461126       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:26.468828       1 config.go:200] "Starting service config controller"
	I1123 09:00:26.469683       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:00:26.469742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:00:26.471028       1 config.go:309] "Starting node config controller"
	I1123 09:00:26.471054       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:00:26.471061       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:00:26.469682       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:00:26.469653       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:00:26.471585       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:00:26.570181       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:00:26.572439       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:00:26.572440       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [48228be1d30060a20acf9df4afdfb84a0d717b1726f690802f7837d337f1f24b] <==
	I1123 09:00:13.497947       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:00:18.006941       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:00:18.006989       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:00:18.007001       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:00:18.007012       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:00:18.097461       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:00:18.097503       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:18.100896       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:00:18.100993       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:18.101017       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:18.101037       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 09:00:18.167602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 09:00:19.201750       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:00:20 embed-certs-672503 kubelet[1478]: I1123 09:00:20.862317    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-672503" podStartSLOduration=0.862301647 podStartE2EDuration="862.301647ms" podCreationTimestamp="2025-11-23 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:20.861796937 +0000 UTC m=+1.273038194" watchObservedRunningTime="2025-11-23 09:00:20.862301647 +0000 UTC m=+1.273542886"
	Nov 23 09:00:20 embed-certs-672503 kubelet[1478]: I1123 09:00:20.895037    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-672503" podStartSLOduration=0.895017588 podStartE2EDuration="895.017588ms" podCreationTimestamp="2025-11-23 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:20.878793489 +0000 UTC m=+1.290034737" watchObservedRunningTime="2025-11-23 09:00:20.895017588 +0000 UTC m=+1.306258828"
	Nov 23 09:00:20 embed-certs-672503 kubelet[1478]: I1123 09:00:20.925264    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-672503" podStartSLOduration=0.925244478 podStartE2EDuration="925.244478ms" podCreationTimestamp="2025-11-23 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:20.895636686 +0000 UTC m=+1.306877926" watchObservedRunningTime="2025-11-23 09:00:20.925244478 +0000 UTC m=+1.336485726"
	Nov 23 09:00:21 embed-certs-672503 kubelet[1478]: I1123 09:00:21.351814    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-672503" podStartSLOduration=1.35179488 podStartE2EDuration="1.35179488s" podCreationTimestamp="2025-11-23 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:20.926301864 +0000 UTC m=+1.337543104" watchObservedRunningTime="2025-11-23 09:00:21.35179488 +0000 UTC m=+1.763036128"
	Nov 23 09:00:23 embed-certs-672503 kubelet[1478]: I1123 09:00:23.782299    1478 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:00:23 embed-certs-672503 kubelet[1478]: I1123 09:00:23.783079    1478 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.066648    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee0e8846-0f87-4847-a24a-d55ed9cf2c0d-xtables-lock\") pod \"kindnet-crv85\" (UID: \"ee0e8846-0f87-4847-a24a-d55ed9cf2c0d\") " pod="kube-system/kindnet-crv85"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.066804    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ad92875-26b3-43b9-8680-17253a8d35d2-kube-proxy\") pod \"kube-proxy-wbnjd\" (UID: \"9ad92875-26b3-43b9-8680-17253a8d35d2\") " pod="kube-system/kube-proxy-wbnjd"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.066831    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad92875-26b3-43b9-8680-17253a8d35d2-xtables-lock\") pod \"kube-proxy-wbnjd\" (UID: \"9ad92875-26b3-43b9-8680-17253a8d35d2\") " pod="kube-system/kube-proxy-wbnjd"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.066896    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ee0e8846-0f87-4847-a24a-d55ed9cf2c0d-cni-cfg\") pod \"kindnet-crv85\" (UID: \"ee0e8846-0f87-4847-a24a-d55ed9cf2c0d\") " pod="kube-system/kindnet-crv85"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.066961    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jpmf\" (UniqueName: \"kubernetes.io/projected/9ad92875-26b3-43b9-8680-17253a8d35d2-kube-api-access-6jpmf\") pod \"kube-proxy-wbnjd\" (UID: \"9ad92875-26b3-43b9-8680-17253a8d35d2\") " pod="kube-system/kube-proxy-wbnjd"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.067024    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvl6w\" (UniqueName: \"kubernetes.io/projected/ee0e8846-0f87-4847-a24a-d55ed9cf2c0d-kube-api-access-fvl6w\") pod \"kindnet-crv85\" (UID: \"ee0e8846-0f87-4847-a24a-d55ed9cf2c0d\") " pod="kube-system/kindnet-crv85"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.067045    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad92875-26b3-43b9-8680-17253a8d35d2-lib-modules\") pod \"kube-proxy-wbnjd\" (UID: \"9ad92875-26b3-43b9-8680-17253a8d35d2\") " pod="kube-system/kube-proxy-wbnjd"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.067064    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee0e8846-0f87-4847-a24a-d55ed9cf2c0d-lib-modules\") pod \"kindnet-crv85\" (UID: \"ee0e8846-0f87-4847-a24a-d55ed9cf2c0d\") " pod="kube-system/kindnet-crv85"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.183522    1478 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.873934    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-crv85" podStartSLOduration=1.873916832 podStartE2EDuration="1.873916832s" podCreationTimestamp="2025-11-23 09:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:25.873584268 +0000 UTC m=+6.284825508" watchObservedRunningTime="2025-11-23 09:00:25.873916832 +0000 UTC m=+6.285158080"
	Nov 23 09:00:26 embed-certs-672503 kubelet[1478]: I1123 09:00:26.851922    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wbnjd" podStartSLOduration=2.8519019979999998 podStartE2EDuration="2.851901998s" podCreationTimestamp="2025-11-23 09:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:26.851849075 +0000 UTC m=+7.263090331" watchObservedRunningTime="2025-11-23 09:00:26.851901998 +0000 UTC m=+7.263143238"
	Nov 23 09:01:06 embed-certs-672503 kubelet[1478]: I1123 09:01:06.276846    1478 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:01:06 embed-certs-672503 kubelet[1478]: I1123 09:01:06.524588    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47a7c798-9292-4915-96ab-78980671decb-config-volume\") pod \"coredns-66bc5c9577-nhnbc\" (UID: \"47a7c798-9292-4915-96ab-78980671decb\") " pod="kube-system/coredns-66bc5c9577-nhnbc"
	Nov 23 09:01:06 embed-certs-672503 kubelet[1478]: I1123 09:01:06.524656    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47f41b96-311e-4020-87db-b84c42d71ba8-tmp\") pod \"storage-provisioner\" (UID: \"47f41b96-311e-4020-87db-b84c42d71ba8\") " pod="kube-system/storage-provisioner"
	Nov 23 09:01:06 embed-certs-672503 kubelet[1478]: I1123 09:01:06.524680    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6vz8\" (UniqueName: \"kubernetes.io/projected/47f41b96-311e-4020-87db-b84c42d71ba8-kube-api-access-l6vz8\") pod \"storage-provisioner\" (UID: \"47f41b96-311e-4020-87db-b84c42d71ba8\") " pod="kube-system/storage-provisioner"
	Nov 23 09:01:06 embed-certs-672503 kubelet[1478]: I1123 09:01:06.524705    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkl6\" (UniqueName: \"kubernetes.io/projected/47a7c798-9292-4915-96ab-78980671decb-kube-api-access-5lkl6\") pod \"coredns-66bc5c9577-nhnbc\" (UID: \"47a7c798-9292-4915-96ab-78980671decb\") " pod="kube-system/coredns-66bc5c9577-nhnbc"
	Nov 23 09:01:07 embed-certs-672503 kubelet[1478]: I1123 09:01:07.992169    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.992139361 podStartE2EDuration="41.992139361s" podCreationTimestamp="2025-11-23 09:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:01:07.001137529 +0000 UTC m=+47.412378769" watchObservedRunningTime="2025-11-23 09:01:07.992139361 +0000 UTC m=+48.403380601"
	Nov 23 09:01:08 embed-certs-672503 kubelet[1478]: I1123 09:01:08.015184    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nhnbc" podStartSLOduration=44.015149346 podStartE2EDuration="44.015149346s" podCreationTimestamp="2025-11-23 09:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:01:07.993329803 +0000 UTC m=+48.404571059" watchObservedRunningTime="2025-11-23 09:01:08.015149346 +0000 UTC m=+48.426390586"
	Nov 23 09:01:10 embed-certs-672503 kubelet[1478]: I1123 09:01:10.652632    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ws9\" (UniqueName: \"kubernetes.io/projected/b98565e7-4d04-4d9a-b95e-186c353129dc-kube-api-access-77ws9\") pod \"busybox\" (UID: \"b98565e7-4d04-4d9a-b95e-186c353129dc\") " pod="default/busybox"
	
	
	==> storage-provisioner [ce730c79fdfcd3dd03d8c3332496eb53c661cab9fd6e3d375d3b44e79a551d4f] <==
	I1123 09:01:06.934305       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:01:06.949383       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:01:06.949523       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:01:06.965358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:07.011715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:01:07.011972       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:01:07.013386       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a51e74d1-c070-46d1-896d-b299af8b25af", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-672503_186aa864-b68f-4600-b9f5-1419bffbdf2a became leader
	I1123 09:01:07.015437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-672503_186aa864-b68f-4600-b9f5-1419bffbdf2a!
	W1123 09:01:07.023097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:07.030111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:01:07.116226       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-672503_186aa864-b68f-4600-b9f5-1419bffbdf2a!
	W1123 09:01:09.033080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:09.038348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:11.058465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:11.073084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:13.076301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:13.084900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:15.092712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:15.103934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:17.108483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:17.119699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:19.123426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:19.131197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:21.135384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:21.142120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-672503 -n embed-certs-672503
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-672503 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-672503
helpers_test.go:243: (dbg) docker inspect embed-certs-672503:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a",
	        "Created": "2025-11-23T08:59:46.1804136Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 217101,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:59:46.242545258Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a/hostname",
	        "HostsPath": "/var/lib/docker/containers/3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a/hosts",
	        "LogPath": "/var/lib/docker/containers/3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a/3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a-json.log",
	        "Name": "/embed-certs-672503",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-672503:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-672503",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3da4c63ab75a910afca460c2155eeb60a452c3826790dea77ea8a4a2ae3d239a",
	                "LowerDir": "/var/lib/docker/overlay2/d9e813299ad3056c7933101be61b4b41ca4cfef00363799af7d026e628e5e44c-init/diff:/var/lib/docker/overlay2/e1de88c117c0c773e1fa636243190fd97eadaa5a8e1ee08fd53827cbac767d35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d9e813299ad3056c7933101be61b4b41ca4cfef00363799af7d026e628e5e44c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d9e813299ad3056c7933101be61b4b41ca4cfef00363799af7d026e628e5e44c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d9e813299ad3056c7933101be61b4b41ca4cfef00363799af7d026e628e5e44c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-672503",
	                "Source": "/var/lib/docker/volumes/embed-certs-672503/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-672503",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-672503",
	                "name.minikube.sigs.k8s.io": "embed-certs-672503",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "efae4430f14a822cd937193977eb629d5980941044bed0c01d3489be3d3dd295",
	            "SandboxKey": "/var/run/docker/netns/efae4430f14a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-672503": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:27:92:d2:91:34",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f1c865c13f589ba7deeafc84c206cf7e759a774dbe5f964667b108e41ea38191",
	                    "EndpointID": "862bc3ada9b10aa54a8f695ed9bac3aea632e7a3002849c5c6b6714677787b6e",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-672503",
	                        "3da4c63ab75a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-672503 -n embed-certs-672503
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-672503 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-672503 logs -n 25: (1.192746747s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-env-023309 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p kubernetes-upgrade-291582                                                                                                                                                                                                                        │ kubernetes-upgrade-291582    │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:55 UTC │
	│ start   │ -p cert-expiration-918102 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:55 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ force-systemd-env-023309 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p force-systemd-env-023309                                                                                                                                                                                                                         │ force-systemd-env-023309     │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p cert-options-886452 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ cert-options-886452 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ ssh     │ -p cert-options-886452 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ delete  │ -p cert-options-886452                                                                                                                                                                                                                              │ cert-options-886452          │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:56 UTC │
	│ start   │ -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:56 UTC │ 23 Nov 25 08:57 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-132097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ stop    │ -p old-k8s-version-132097 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-132097 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:58 UTC │
	│ start   │ -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:58 UTC │ 23 Nov 25 08:59 UTC │
	│ image   │ old-k8s-version-132097 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ pause   │ -p old-k8s-version-132097 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ unpause │ -p old-k8s-version-132097 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p old-k8s-version-132097                                                                                                                                                                                                                           │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p cert-expiration-918102 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ delete  │ -p old-k8s-version-132097                                                                                                                                                                                                                           │ old-k8s-version-132097       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p default-k8s-diff-port-118762 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p cert-expiration-918102                                                                                                                                                                                                                           │ cert-expiration-918102       │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 08:59 UTC │
	│ start   │ -p embed-certs-672503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 08:59 UTC │ 23 Nov 25 09:01 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-118762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ stop    │ -p default-k8s-diff-port-118762 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:59:40
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:59:40.577485  216074 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:59:40.577691  216074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:40.577718  216074 out.go:374] Setting ErrFile to fd 2...
	I1123 08:59:40.577739  216074 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:40.578089  216074 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:59:40.578573  216074 out.go:368] Setting JSON to false
	I1123 08:59:40.579525  216074 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6133,"bootTime":1763882248,"procs":164,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:59:40.579625  216074 start.go:143] virtualization:  
	I1123 08:59:40.583259  216074 out.go:179] * [embed-certs-672503] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:59:40.587830  216074 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:59:40.587967  216074 notify.go:221] Checking for updates...
	I1123 08:59:40.594558  216074 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:59:40.597788  216074 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:59:40.601027  216074 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:59:40.604233  216074 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:59:40.607539  216074 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:59:40.611140  216074 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:40.611247  216074 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:59:40.656282  216074 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:59:40.656413  216074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:40.752458  216074 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:44 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:59:40.738300735 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:40.752566  216074 docker.go:319] overlay module found
	I1123 08:59:40.756622  216074 out.go:179] * Using the docker driver based on user configuration
	I1123 08:59:40.759788  216074 start.go:309] selected driver: docker
	I1123 08:59:40.759810  216074 start.go:927] validating driver "docker" against <nil>
	I1123 08:59:40.759823  216074 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:59:40.760559  216074 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:40.840879  216074 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:true NGoroutines:54 SystemTime:2025-11-23 08:59:40.831791559 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:59:40.841036  216074 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:59:40.841265  216074 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:59:40.844487  216074 out.go:179] * Using Docker driver with root privileges
	I1123 08:59:40.847551  216074 cni.go:84] Creating CNI manager for ""
	I1123 08:59:40.847624  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:40.847640  216074 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:59:40.847726  216074 start.go:353] cluster config:
	{Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:40.850947  216074 out.go:179] * Starting "embed-certs-672503" primary control-plane node in "embed-certs-672503" cluster
	I1123 08:59:40.853960  216074 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:59:40.856924  216074 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:59:40.859875  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:40.859924  216074 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 08:59:40.859933  216074 cache.go:65] Caching tarball of preloaded images
	I1123 08:59:40.859968  216074 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:59:40.860013  216074 preload.go:238] Found /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 08:59:40.860024  216074 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:59:40.860143  216074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json ...
	I1123 08:59:40.860163  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json: {Name:mkb81d39d58a71dac5e98d24c241cff9b78e273e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:40.879736  216074 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 08:59:40.879759  216074 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 08:59:40.879779  216074 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:59:40.879808  216074 start.go:360] acquireMachinesLock for embed-certs-672503: {Name:mk52b3d46d7a43264b4677c9fc6abfc0706853fe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:59:40.879915  216074 start.go:364] duration metric: took 86.869µs to acquireMachinesLock for "embed-certs-672503"
	I1123 08:59:40.879944  216074 start.go:93] Provisioning new machine with config: &{Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 08:59:40.880019  216074 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:59:39.039954  214550 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-118762:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.007752645s)
	I1123 08:59:39.039991  214550 kic.go:203] duration metric: took 5.007913738s to extract preloaded images to volume ...
	W1123 08:59:39.040149  214550 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:59:39.040271  214550 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:59:39.103132  214550 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-118762 --name default-k8s-diff-port-118762 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-118762 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-118762 --network default-k8s-diff-port-118762 --ip 192.168.85.2 --volume default-k8s-diff-port-118762:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:59:39.606571  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Running}}
	I1123 08:59:39.652908  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:39.675600  214550 cli_runner.go:164] Run: docker exec default-k8s-diff-port-118762 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:59:39.805153  214550 oci.go:144] the created container "default-k8s-diff-port-118762" has a running status.
	I1123 08:59:39.805181  214550 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa...
	I1123 08:59:40.603002  214550 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:59:40.646836  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:40.670926  214550 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:59:40.670945  214550 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-118762 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:59:40.744487  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 08:59:40.770445  214550 machine.go:94] provisionDockerMachine start ...
	I1123 08:59:40.770539  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:40.791316  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:40.791758  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:40.791772  214550 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:59:40.792437  214550 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51880->127.0.0.1:33064: read: connection reset by peer
	I1123 08:59:40.883578  216074 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 08:59:40.883819  216074 start.go:159] libmachine.API.Create for "embed-certs-672503" (driver="docker")
	I1123 08:59:40.883864  216074 client.go:173] LocalClient.Create starting
	I1123 08:59:40.883946  216074 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem
	I1123 08:59:40.883982  216074 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:40.884002  216074 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:40.884067  216074 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem
	I1123 08:59:40.884090  216074 main.go:143] libmachine: Decoding PEM data...
	I1123 08:59:40.884109  216074 main.go:143] libmachine: Parsing certificate...
	I1123 08:59:40.884452  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:59:40.900264  216074 cli_runner.go:211] docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:59:40.900362  216074 network_create.go:284] running [docker network inspect embed-certs-672503] to gather additional debugging logs...
	I1123 08:59:40.900388  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503
	W1123 08:59:40.916918  216074 cli_runner.go:211] docker network inspect embed-certs-672503 returned with exit code 1
	I1123 08:59:40.916950  216074 network_create.go:287] error running [docker network inspect embed-certs-672503]: docker network inspect embed-certs-672503: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-672503 not found
	I1123 08:59:40.916965  216074 network_create.go:289] output of [docker network inspect embed-certs-672503]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-672503 not found
	
	** /stderr **
	I1123 08:59:40.917065  216074 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:40.933652  216074 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ab12b2c3b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:c9:6d:7b:80:76} reservation:<nil>}
	I1123 08:59:40.933989  216074 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7f5e4a52a57c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:1a:79:b2:02:66} reservation:<nil>}
	I1123 08:59:40.934307  216074 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed031858d624 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:47:7d:04:56:4a} reservation:<nil>}
	I1123 08:59:40.934717  216074 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c7270}
	I1123 08:59:40.934741  216074 network_create.go:124] attempt to create docker network embed-certs-672503 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 08:59:40.934796  216074 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-672503 embed-certs-672503
	I1123 08:59:40.992310  216074 network_create.go:108] docker network embed-certs-672503 192.168.76.0/24 created
	I1123 08:59:40.992345  216074 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-672503" container
	I1123 08:59:40.992424  216074 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:59:41.010086  216074 cli_runner.go:164] Run: docker volume create embed-certs-672503 --label name.minikube.sigs.k8s.io=embed-certs-672503 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:59:41.028903  216074 oci.go:103] Successfully created a docker volume embed-certs-672503
	I1123 08:59:41.029006  216074 cli_runner.go:164] Run: docker run --rm --name embed-certs-672503-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-672503 --entrypoint /usr/bin/test -v embed-certs-672503:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:59:41.597394  216074 oci.go:107] Successfully prepared a docker volume embed-certs-672503
	I1123 08:59:41.597456  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:41.597467  216074 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:59:41.597532  216074 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-672503:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:59:43.963549  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-118762
	
	I1123 08:59:43.963629  214550 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-118762"
	I1123 08:59:43.963730  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:43.982067  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:43.982376  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:43.982388  214550 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-118762 && echo "default-k8s-diff-port-118762" | sudo tee /etc/hostname
	I1123 08:59:44.162438  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-118762
	
	I1123 08:59:44.162524  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.184402  214550 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:44.184717  214550 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33064 <nil> <nil>}
	I1123 08:59:44.184743  214550 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-118762' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-118762/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-118762' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:59:44.387688  214550 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:59:44.387725  214550 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-2811/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-2811/.minikube}
	I1123 08:59:44.387751  214550 ubuntu.go:190] setting up certificates
	I1123 08:59:44.387761  214550 provision.go:84] configureAuth start
	I1123 08:59:44.387823  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.406977  214550 provision.go:143] copyHostCerts
	I1123 08:59:44.407043  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem, removing ...
	I1123 08:59:44.407056  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem
	I1123 08:59:44.407135  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem (1082 bytes)
	I1123 08:59:44.407247  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem, removing ...
	I1123 08:59:44.407259  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem
	I1123 08:59:44.407287  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem (1123 bytes)
	I1123 08:59:44.407420  214550 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem, removing ...
	I1123 08:59:44.407449  214550 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem
	I1123 08:59:44.407501  214550 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem (1679 bytes)
	I1123 08:59:44.407571  214550 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-118762 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-118762 localhost minikube]
	I1123 08:59:44.485276  214550 provision.go:177] copyRemoteCerts
	I1123 08:59:44.485399  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:59:44.485475  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.502836  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.611676  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 08:59:44.631601  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:59:44.649182  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:59:44.666321  214550 provision.go:87] duration metric: took 278.533612ms to configureAuth
	I1123 08:59:44.666344  214550 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:59:44.666518  214550 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:44.666526  214550 machine.go:97] duration metric: took 3.896062717s to provisionDockerMachine
	I1123 08:59:44.666532  214550 client.go:176] duration metric: took 11.505696925s to LocalClient.Create
	I1123 08:59:44.666546  214550 start.go:167] duration metric: took 11.505763117s to libmachine.API.Create "default-k8s-diff-port-118762"
	I1123 08:59:44.666552  214550 start.go:293] postStartSetup for "default-k8s-diff-port-118762" (driver="docker")
	I1123 08:59:44.666561  214550 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:59:44.666612  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:59:44.666651  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.683801  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.791506  214550 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:59:44.795326  214550 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:59:44.795375  214550 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:59:44.795403  214550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/addons for local assets ...
	I1123 08:59:44.795479  214550 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/files for local assets ...
	I1123 08:59:44.795605  214550 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem -> 46242.pem in /etc/ssl/certs
	I1123 08:59:44.795716  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:59:44.804406  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:44.824224  214550 start.go:296] duration metric: took 157.657779ms for postStartSetup
	I1123 08:59:44.824627  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.842791  214550 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/config.json ...
	I1123 08:59:44.845272  214550 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:59:44.845334  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.870817  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:44.973574  214550 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:59:44.978835  214550 start.go:128] duration metric: took 11.821803269s to createHost
	I1123 08:59:44.978859  214550 start.go:83] releasing machines lock for "default-k8s-diff-port-118762", held for 11.821970245s
	I1123 08:59:44.978934  214550 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-118762
	I1123 08:59:44.996375  214550 ssh_runner.go:195] Run: cat /version.json
	I1123 08:59:44.996410  214550 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:59:44.996429  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:44.997293  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 08:59:45.019323  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:45.019748  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 08:59:45.266005  214550 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:45.276798  214550 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:59:45.286312  214550 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:59:45.286509  214550 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:59:45.400996  214550 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:59:45.401066  214550 start.go:496] detecting cgroup driver to use...
	I1123 08:59:45.401106  214550 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:59:45.401166  214550 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:59:45.416740  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:59:45.430174  214550 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:59:45.430277  214550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:59:45.449266  214550 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:59:45.468575  214550 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:59:45.593366  214550 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:59:45.727407  214550 docker.go:234] disabling docker service ...
	I1123 08:59:45.727524  214550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:59:45.750566  214550 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:59:45.763685  214550 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:59:45.882473  214550 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:59:46.015128  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:59:46.029863  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:59:46.051000  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:59:46.067292  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:59:46.081288  214550 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:59:46.081404  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:59:46.100139  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:46.120619  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:59:46.133469  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:46.142574  214550 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:59:46.152921  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:59:46.164064  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:59:46.173191  214550 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:59:46.188341  214550 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:59:46.201637  214550 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:59:46.214012  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:46.386854  214550 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:59:46.574017  214550 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:59:46.574082  214550 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:59:46.590863  214550 start.go:564] Will wait 60s for crictl version
	I1123 08:59:46.590924  214550 ssh_runner.go:195] Run: which crictl
	I1123 08:59:46.596219  214550 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:59:46.641889  214550 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1123 08:59:46.641953  214550 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:46.715861  214550 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:46.799546  214550 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:59:46.802513  214550 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-118762 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:46.830038  214550 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 08:59:46.834203  214550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:46.850678  214550 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-118762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:59:46.850809  214550 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:46.850885  214550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:46.899220  214550 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:46.899242  214550 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:59:46.899304  214550 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:46.940637  214550 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:46.940658  214550 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:59:46.940666  214550 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1123 08:59:46.940760  214550 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-118762 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:59:46.941123  214550 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:59:47.001942  214550 cni.go:84] Creating CNI manager for ""
	I1123 08:59:47.001962  214550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:47.001977  214550 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:59:47.002000  214550 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-118762 NodeName:default-k8s-diff-port-118762 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:59:47.002115  214550 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-118762"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:59:47.002179  214550 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:59:47.020644  214550 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:59:47.020704  214550 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:47.037002  214550 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1123 08:59:47.055802  214550 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:47.079429  214550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1123 08:59:47.092521  214550 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:47.096917  214550 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:47.106392  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:47.305463  214550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:59:47.337722  214550 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762 for IP: 192.168.85.2
	I1123 08:59:47.337739  214550 certs.go:195] generating shared ca certs ...
	I1123 08:59:47.337754  214550 certs.go:227] acquiring lock for ca certs: {Name:mk62ed57b444cc29d692b7c3030f7d32bd07c4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.337885  214550 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key
	I1123 08:59:47.337928  214550 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key
	I1123 08:59:47.337936  214550 certs.go:257] generating profile certs ...
	I1123 08:59:47.337988  214550 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key
	I1123 08:59:47.337997  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt with IP's: []
	I1123 08:59:47.952908  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt ...
	I1123 08:59:47.952991  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt: {Name:mkf95cd7f0813a939fc5a10b868018298b21adb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.953216  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key ...
	I1123 08:59:47.953254  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.key: {Name:mkf9a2acc2c42bd0a0cf1a1f2787b6cd46ba4f2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:47.953415  214550 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca
	I1123 08:59:47.953453  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1123 08:59:48.203697  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca ...
	I1123 08:59:48.203769  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca: {Name:mk05909547f3239afc9409b846b3fb486118a441 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.203987  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca ...
	I1123 08:59:48.204023  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca: {Name:mkec035b62be2e775b2f0c85ff409f77aebf0a44 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.204156  214550 certs.go:382] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt.4eb9e2ca -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt
	I1123 08:59:48.204271  214550 certs.go:386] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key.4eb9e2ca -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key
	I1123 08:59:48.204380  214550 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key
	I1123 08:59:48.204418  214550 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt with IP's: []
	I1123 08:59:48.359177  214550 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt ...
	I1123 08:59:48.359211  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt: {Name:mkf91279fb6f4fe072e258fdea87868d2840f420 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.359412  214550 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key ...
	I1123 08:59:48.359429  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key: {Name:mkbf74023435808035706f9a2ad6638168a8a889 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:48.359663  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem (1338 bytes)
	W1123 08:59:48.359708  214550 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:48.359723  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:48.359753  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:59:48.359783  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:48.359810  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem (1679 bytes)
	I1123 08:59:48.359858  214550 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:48.360416  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:48.379912  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:59:48.398946  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:48.417150  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:59:48.434559  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 08:59:48.452066  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:59:48.470350  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:48.488326  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:59:48.506336  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem --> /usr/share/ca-certificates/4624.pem (1338 bytes)
	I1123 08:59:48.524422  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /usr/share/ca-certificates/46242.pem (1708 bytes)
	I1123 08:59:48.541642  214550 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:48.559509  214550 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:48.572933  214550 ssh_runner.go:195] Run: openssl version
	I1123 08:59:48.579412  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46242.pem && ln -fs /usr/share/ca-certificates/46242.pem /etc/ssl/certs/46242.pem"
	I1123 08:59:48.588035  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.591879  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:18 /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.591946  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46242.pem
	I1123 08:59:48.633205  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46242.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:48.641796  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:48.650209  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.654132  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.654249  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:48.695982  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:59:48.704319  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4624.pem && ln -fs /usr/share/ca-certificates/4624.pem /etc/ssl/certs/4624.pem"
	I1123 08:59:48.712849  214550 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.716712  214550 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:18 /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.716781  214550 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4624.pem
	I1123 08:59:48.757938  214550 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4624.pem /etc/ssl/certs/51391683.0"
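	(The openssl/ln sequence above is the standard subject-hash symlink step that makes a CA trusted system-wide. A minimal sketch of the same step, assuming the minikubeCA.pem paths already set up in the log above, would be:
	  # compute the OpenSSL subject hash and create the <hash>.0 symlink the TLS loader looks up
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	which matches the b5213941.0 link created here.)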
	I1123 08:59:48.766377  214550 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:48.769975  214550 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:48.770030  214550 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-118762 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-118762 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLo
g:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:48.770114  214550 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:48.770174  214550 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:48.795754  214550 cri.go:89] found id: ""
	I1123 08:59:48.795881  214550 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:48.803757  214550 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:48.811647  214550 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:48.811743  214550 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:48.819712  214550 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:48.819733  214550 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:48.819805  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 08:59:48.827458  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:48.827560  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:48.835278  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 08:59:48.843241  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:48.843395  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:48.850790  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 08:59:48.859021  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:48.859145  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:48.866723  214550 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 08:59:48.874202  214550 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:48.874315  214550 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:59:48.882081  214550 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:48.932250  214550 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:48.932626  214550 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:48.968464  214550 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:48.968571  214550 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:48.968634  214550 kubeadm.go:319] OS: Linux
	I1123 08:59:48.968710  214550 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:48.968779  214550 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:48.968852  214550 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:48.968949  214550 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:48.969029  214550 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:48.969104  214550 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:48.969191  214550 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:48.969263  214550 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:48.969334  214550 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:49.039395  214550 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:49.039547  214550 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:49.039694  214550 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:49.045139  214550 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:46.061340  216074 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-672503:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.463759827s)
	I1123 08:59:46.061369  216074 kic.go:203] duration metric: took 4.463899193s to extract preloaded images to volume ...
	W1123 08:59:46.061515  216074 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1123 08:59:46.061700  216074 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:59:46.159063  216074 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-672503 --name embed-certs-672503 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-672503 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-672503 --network embed-certs-672503 --ip 192.168.76.2 --volume embed-certs-672503:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:59:46.530738  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Running}}
	I1123 08:59:46.558782  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:46.582800  216074 cli_runner.go:164] Run: docker exec embed-certs-672503 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:59:46.646806  216074 oci.go:144] the created container "embed-certs-672503" has a running status.
	I1123 08:59:46.646847  216074 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa...
	I1123 08:59:46.847783  216074 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:59:46.880288  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:46.917106  216074 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:59:46.917131  216074 kic_runner.go:114] Args: [docker exec --privileged embed-certs-672503 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:59:46.987070  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 08:59:47.019780  216074 machine.go:94] provisionDockerMachine start ...
	I1123 08:59:47.019874  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:47.051570  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:47.051918  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:47.051935  216074 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:59:47.052575  216074 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1123 08:59:50.211545  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-672503
	
	I1123 08:59:50.211595  216074 ubuntu.go:182] provisioning hostname "embed-certs-672503"
	I1123 08:59:50.211673  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:50.237002  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:50.237319  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:50.237337  216074 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-672503 && echo "embed-certs-672503" | sudo tee /etc/hostname
	I1123 08:59:50.436539  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-672503
	
	I1123 08:59:50.436687  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:50.465709  216074 main.go:143] libmachine: Using SSH client type: native
	I1123 08:59:50.466029  216074 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33069 <nil> <nil>}
	I1123 08:59:50.466045  216074 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-672503' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-672503/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-672503' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:59:49.051452  214550 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:49.051585  214550 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:49.051703  214550 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:50.049674  214550 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:50.094855  214550 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:50.781521  214550 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:51.007002  214550 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:51.586516  214550 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:51.587407  214550 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-118762 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:52.294730  214550 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:52.295126  214550 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-118762 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1123 08:59:50.619868  216074 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:59:50.619905  216074 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-2811/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-2811/.minikube}
	I1123 08:59:50.619926  216074 ubuntu.go:190] setting up certificates
	I1123 08:59:50.619937  216074 provision.go:84] configureAuth start
	I1123 08:59:50.620004  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:50.645393  216074 provision.go:143] copyHostCerts
	I1123 08:59:50.645466  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem, removing ...
	I1123 08:59:50.645475  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem
	I1123 08:59:50.645553  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/ca.pem (1082 bytes)
	I1123 08:59:50.645639  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem, removing ...
	I1123 08:59:50.645644  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem
	I1123 08:59:50.645669  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/cert.pem (1123 bytes)
	I1123 08:59:50.645724  216074 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem, removing ...
	I1123 08:59:50.645729  216074 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem
	I1123 08:59:50.645751  216074 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-2811/.minikube/key.pem (1679 bytes)
	I1123 08:59:50.645795  216074 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem org=jenkins.embed-certs-672503 san=[127.0.0.1 192.168.76.2 embed-certs-672503 localhost minikube]
	I1123 08:59:51.127888  216074 provision.go:177] copyRemoteCerts
	I1123 08:59:51.127960  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:59:51.128004  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.153368  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.284623  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1123 08:59:51.314621  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1123 08:59:51.335720  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 08:59:51.355451  216074 provision.go:87] duration metric: took 735.481705ms to configureAuth
	I1123 08:59:51.355533  216074 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:59:51.355763  216074 config.go:182] Loaded profile config "embed-certs-672503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:59:51.355791  216074 machine.go:97] duration metric: took 4.335986452s to provisionDockerMachine
	I1123 08:59:51.355815  216074 client.go:176] duration metric: took 10.471938723s to LocalClient.Create
	I1123 08:59:51.355856  216074 start.go:167] duration metric: took 10.472037333s to libmachine.API.Create "embed-certs-672503"
	I1123 08:59:51.355949  216074 start.go:293] postStartSetup for "embed-certs-672503" (driver="docker")
	I1123 08:59:51.355976  216074 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:59:51.356061  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:59:51.356134  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.375632  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.492356  216074 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:59:51.496551  216074 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:59:51.496580  216074 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:59:51.496592  216074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/addons for local assets ...
	I1123 08:59:51.496645  216074 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-2811/.minikube/files for local assets ...
	I1123 08:59:51.496721  216074 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem -> 46242.pem in /etc/ssl/certs
	I1123 08:59:51.496826  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 08:59:51.505195  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:51.525735  216074 start.go:296] duration metric: took 169.754775ms for postStartSetup
	I1123 08:59:51.526206  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:51.546243  216074 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/config.json ...
	I1123 08:59:51.546511  216074 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:59:51.546553  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.568894  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.680931  216074 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:59:51.686143  216074 start.go:128] duration metric: took 10.806110424s to createHost
	I1123 08:59:51.686171  216074 start.go:83] releasing machines lock for "embed-certs-672503", held for 10.806242996s
	I1123 08:59:51.686257  216074 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-672503
	I1123 08:59:51.705486  216074 ssh_runner.go:195] Run: cat /version.json
	I1123 08:59:51.705573  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.705949  216074 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:59:51.706024  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 08:59:51.760593  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.767588  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 08:59:51.883448  216074 ssh_runner.go:195] Run: systemctl --version
	I1123 08:59:51.991493  216074 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:59:51.996626  216074 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:59:51.996703  216074 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:59:52.044663  216074 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1123 08:59:52.044689  216074 start.go:496] detecting cgroup driver to use...
	I1123 08:59:52.044721  216074 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1123 08:59:52.044781  216074 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1123 08:59:52.061494  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1123 08:59:52.076189  216074 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:59:52.076260  216074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:59:52.094291  216074 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:59:52.114994  216074 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:59:52.292895  216074 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:59:52.481817  216074 docker.go:234] disabling docker service ...
	I1123 08:59:52.481931  216074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:59:52.508317  216074 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:59:52.526364  216074 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:59:52.700213  216074 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:59:52.897094  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:59:52.915331  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:59:52.931211  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1123 08:59:52.946225  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1123 08:59:52.956101  216074 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1123 08:59:52.956226  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1123 08:59:52.965762  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:52.975341  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1123 08:59:52.985192  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1123 08:59:52.994955  216074 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:59:53.010410  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1123 08:59:53.027207  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1123 08:59:53.042077  216074 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1123 08:59:53.054424  216074 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:59:53.063874  216074 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:59:53.072557  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:53.226737  216074 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1123 08:59:53.443692  216074 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1123 08:59:53.443892  216074 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1123 08:59:53.448833  216074 start.go:564] Will wait 60s for crictl version
	I1123 08:59:53.448947  216074 ssh_runner.go:195] Run: which crictl
	I1123 08:59:53.453157  216074 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:59:53.486128  216074 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
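	(The sed edits a few lines up switch containerd to the cgroupfs cgroup driver and pin the pause image before the restart. A hedged spot-check, using the same config path shown in the log, would be:
	  # confirm the edited keys landed and that containerd came back after the restart
	  sudo grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
	  sudo systemctl is-active containerd && sudo crictl version
	crictl version works at this point because /etc/crictl.yaml was just written with the containerd runtime endpoint.)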
	I1123 08:59:53.486258  216074 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:53.513131  216074 ssh_runner.go:195] Run: containerd --version
	I1123 08:59:53.540090  216074 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1123 08:59:53.543140  216074 cli_runner.go:164] Run: docker network inspect embed-certs-672503 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:59:53.564398  216074 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 08:59:53.569921  216074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:53.584791  216074 kubeadm.go:884] updating cluster {Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA AP
IServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:59:53.584953  216074 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:59:53.585060  216074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:53.625666  216074 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:53.625695  216074 containerd.go:534] Images already preloaded, skipping extraction
	I1123 08:59:53.625759  216074 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:59:53.653757  216074 containerd.go:627] all images are preloaded for containerd runtime.
	I1123 08:59:53.653781  216074 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:59:53.653789  216074 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1123 08:59:53.653881  216074 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-672503 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:59:53.653948  216074 ssh_runner.go:195] Run: sudo crictl info
	I1123 08:59:53.696072  216074 cni.go:84] Creating CNI manager for ""
	I1123 08:59:53.696098  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:59:53.696113  216074 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:59:53.696140  216074 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-672503 NodeName:embed-certs-672503 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPo
dPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:59:53.696260  216074 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-672503"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:59:53.696337  216074 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:59:53.705716  216074 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:59:53.705795  216074 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:59:53.718287  216074 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1123 08:59:53.737046  216074 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:59:53.760149  216074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
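	(The kubeadm.yaml.new written here is the rendered config printed a few lines above, which the later kubeadm init --config run consumes. As a sketch, and assuming the "config validate" subcommand available in recent kubeadm releases, the file could be sanity-checked offline before init:
	  # validate the generated config with the same kubeadm binary path the log uses
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)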
	I1123 08:59:53.778487  216074 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:59:53.782565  216074 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:59:53.792649  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:59:53.947067  216074 ssh_runner.go:195] Run: sudo systemctl start kubelet
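	(At this point the kubelet systemd drop-in printed earlier has been copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and the service started. A small sketch for confirming the merged unit and the effective ExecStart, using plain systemctl and nothing minikube-specific:
	  systemctl cat kubelet                      # kubelet.service plus the 10-kubeadm.conf drop-in
	  systemctl show kubelet -p ExecStart --no-pager
	)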
	I1123 08:59:53.969434  216074 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503 for IP: 192.168.76.2
	I1123 08:59:53.969452  216074 certs.go:195] generating shared ca certs ...
	I1123 08:59:53.969468  216074 certs.go:227] acquiring lock for ca certs: {Name:mk62ed57b444cc29d692b7c3030f7d32bd07c4d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:53.969604  216074 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key
	I1123 08:59:53.969644  216074 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key
	I1123 08:59:53.969650  216074 certs.go:257] generating profile certs ...
	I1123 08:59:53.969704  216074 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key
	I1123 08:59:53.969718  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt with IP's: []
	I1123 08:59:54.209900  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt ...
	I1123 08:59:54.209965  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.crt: {Name:mk5c525ca71ddd2fe2c7f6b3ca8599f23905a8c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.210184  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key ...
	I1123 08:59:54.210197  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/client.key: {Name:mk8943be44317db4dff6c1e7eaf6a19a57aa6c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.210284  216074 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae
	I1123 08:59:54.210296  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 08:59:54.801069  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae ...
	I1123 08:59:54.801096  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae: {Name:mk380799870e5ea7b7c67a4d865af58b1de5aef5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.801278  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae ...
	I1123 08:59:54.801290  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae: {Name:mk102df1c6315a508518783bccf3cb2f81c38779 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:54.801364  216074 certs.go:382] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt.87dc76ae -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt
	I1123 08:59:54.801439  216074 certs.go:386] copying /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key.87dc76ae -> /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key
	I1123 08:59:54.801491  216074 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key
	I1123 08:59:54.801507  216074 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt with IP's: []
	I1123 08:59:55.253694  216074 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt ...
	I1123 08:59:55.253767  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt: {Name:mkdf06b6c921783e84858386a11a6aa335d63967 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:55.253999  216074 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key ...
	I1123 08:59:55.254013  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key: {Name:mk979f2bcf5527fe8ab1fb441ce8c10881831a69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:59:55.254199  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem (1338 bytes)
	W1123 08:59:55.254240  216074 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624_empty.pem, impossibly tiny 0 bytes
	I1123 08:59:55.254249  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:59:55.254277  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem (1082 bytes)
	I1123 08:59:55.254303  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:59:55.254368  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/certs/key.pem (1679 bytes)
	I1123 08:59:55.254413  216074 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem (1708 bytes)
	I1123 08:59:55.255001  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:59:55.275757  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1123 08:59:55.301850  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:59:55.327043  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1123 08:59:55.356120  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 08:59:55.379337  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 08:59:55.403251  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:59:55.432903  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/embed-certs-672503/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 08:59:55.452955  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:59:55.477346  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/certs/4624.pem --> /usr/share/ca-certificates/4624.pem (1338 bytes)
	I1123 08:59:55.510351  216074 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/ssl/certs/46242.pem --> /usr/share/ca-certificates/46242.pem (1708 bytes)
	I1123 08:59:55.531366  216074 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:59:55.546185  216074 ssh_runner.go:195] Run: openssl version
	I1123 08:59:55.552895  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/4624.pem && ln -fs /usr/share/ca-certificates/4624.pem /etc/ssl/certs/4624.pem"
	I1123 08:59:55.562322  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.566546  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:18 /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.566661  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4624.pem
	I1123 08:59:55.608819  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/4624.pem /etc/ssl/certs/51391683.0"
	I1123 08:59:55.617792  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/46242.pem && ln -fs /usr/share/ca-certificates/46242.pem /etc/ssl/certs/46242.pem"
	I1123 08:59:55.626621  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.631031  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:18 /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.631147  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/46242.pem
	I1123 08:59:55.673213  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/46242.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 08:59:55.682467  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:59:55.691629  216074 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.696005  216074 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:11 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.696116  216074 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:59:55.737391  216074 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 08:59:55.746485  216074 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:59:55.750669  216074 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:59:55.750779  216074 kubeadm.go:401] StartCluster: {Name:embed-certs-672503 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-672503 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFir
mwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:59:55.750882  216074 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1123 08:59:55.750971  216074 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:59:55.781886  216074 cri.go:89] found id: ""
	I1123 08:59:55.782008  216074 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:59:55.792128  216074 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:59:55.801015  216074 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:59:55.801120  216074 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:59:55.811498  216074 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:59:55.811567  216074 kubeadm.go:158] found existing configuration files:
	
	I1123 08:59:55.811651  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:59:55.820390  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:59:55.820489  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:59:55.828204  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:59:55.837261  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:59:55.837355  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:59:55.845286  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:59:55.854064  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:59:55.854174  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:59:55.861833  216074 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:59:55.870496  216074 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:59:55.870610  216074 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:59:55.878638  216074 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:59:55.935971  216074 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:59:55.937587  216074 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:59:56.004559  216074 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:59:56.004761  216074 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1123 08:59:56.004834  216074 kubeadm.go:319] OS: Linux
	I1123 08:59:56.004912  216074 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:59:56.004998  216074 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1123 08:59:56.005083  216074 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:59:56.005163  216074 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:59:56.005244  216074 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:59:56.005326  216074 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:59:56.005405  216074 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:59:56.005488  216074 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:59:56.005568  216074 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1123 08:59:56.119904  216074 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:59:56.120070  216074 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:59:56.120207  216074 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:59:56.130630  216074 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:59:54.179851  214550 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:59:55.466764  214550 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:55.672141  214550 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:59:55.672731  214550 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:59:55.836881  214550 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:59:56.018357  214550 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:59:56.361926  214550 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:59:56.873997  214550 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:59:57.413691  214550 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:59:57.414774  214550 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:59:57.417706  214550 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:59:57.421342  214550 out.go:252]   - Booting up control plane ...
	I1123 08:59:57.421437  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:59:57.426176  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:59:57.426253  214550 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:59:57.445605  214550 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:59:57.445714  214550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:59:57.456012  214550 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:59:57.456111  214550 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:59:57.456152  214550 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:59:57.617060  214550 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:59:57.617179  214550 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:59:56.136350  216074 out.go:252]   - Generating certificates and keys ...
	I1123 08:59:56.136541  216074 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:59:56.136667  216074 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:59:57.121922  216074 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:59:57.436901  216074 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:59:57.609063  216074 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:59:58.013484  216074 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:59:58.298959  216074 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:59:58.303729  216074 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-672503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:58.349481  216074 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:59:58.350030  216074 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-672503 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 08:59:59.325836  216074 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:00:00.299809  216074 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:59:59.119693  214550 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500938234s
	I1123 08:59:59.122603  214550 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:59:59.122949  214550 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8444/livez
	I1123 08:59:59.123601  214550 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:59:59.124077  214550 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:00:00.879718  216074 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:00:00.879799  216074 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:00:01.122151  216074 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:00:03.397018  216074 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:00:05.387724  216074 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:00:05.691737  216074 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:00:06.099799  216074 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:00:06.099904  216074 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:00:06.107751  216074 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:00:03.716327  214550 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.591863015s
	I1123 09:00:09.442146  214550 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.317417042s
	I1123 09:00:09.630647  214550 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 10.507233792s
	I1123 09:00:09.661041  214550 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:00:09.696775  214550 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:00:09.724658  214550 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:00:09.725105  214550 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-118762 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:00:09.789313  214550 kubeadm.go:319] [bootstrap-token] Using token: d97ou5.m8drvm11cz5qqhuf
	I1123 09:00:06.111147  216074 out.go:252]   - Booting up control plane ...
	I1123 09:00:06.111260  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:00:06.111338  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:00:06.111425  216074 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:00:06.141906  216074 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:00:06.142016  216074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:00:06.152623  216074 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:00:06.152727  216074 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:00:06.152767  216074 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:00:06.424623  216074 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:00:06.424743  216074 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:00:07.419394  216074 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001849125s
	I1123 09:00:07.422769  216074 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:00:07.422861  216074 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 09:00:07.423174  216074 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:00:07.423260  216074 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:00:09.792446  214550 out.go:252]   - Configuring RBAC rules ...
	I1123 09:00:09.792565  214550 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:00:09.822919  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:00:09.841947  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:00:09.852584  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:00:09.860084  214550 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:00:09.867079  214550 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:00:10.041393  214550 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:00:10.492226  214550 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:00:11.049466  214550 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:00:11.050970  214550 kubeadm.go:319] 
	I1123 09:00:11.051044  214550 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:00:11.051049  214550 kubeadm.go:319] 
	I1123 09:00:11.051126  214550 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:00:11.051130  214550 kubeadm.go:319] 
	I1123 09:00:11.051155  214550 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:00:11.054107  214550 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:00:11.054173  214550 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:00:11.054178  214550 kubeadm.go:319] 
	I1123 09:00:11.054232  214550 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:00:11.054259  214550 kubeadm.go:319] 
	I1123 09:00:11.054308  214550 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:00:11.054312  214550 kubeadm.go:319] 
	I1123 09:00:11.054364  214550 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:00:11.054439  214550 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:00:11.054508  214550 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:00:11.054514  214550 kubeadm.go:319] 
	I1123 09:00:11.054918  214550 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:00:11.054999  214550 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:00:11.055003  214550 kubeadm.go:319] 
	I1123 09:00:11.055310  214550 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token d97ou5.m8drvm11cz5qqhuf \
	I1123 09:00:11.055433  214550 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c \
	I1123 09:00:11.055653  214550 kubeadm.go:319] 	--control-plane 
	I1123 09:00:11.055662  214550 kubeadm.go:319] 
	I1123 09:00:11.056081  214550 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:00:11.056091  214550 kubeadm.go:319] 
	I1123 09:00:11.056374  214550 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token d97ou5.m8drvm11cz5qqhuf \
	I1123 09:00:11.056668  214550 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c 
	I1123 09:00:11.065038  214550 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:00:11.065464  214550 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:00:11.065590  214550 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:00:11.065601  214550 cni.go:84] Creating CNI manager for ""
	I1123 09:00:11.065609  214550 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:00:11.068935  214550 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:00:11.071817  214550 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:00:11.083987  214550 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:00:11.084065  214550 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:00:11.157462  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:00:11.877723  214550 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:00:11.877851  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:11.877919  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-118762 minikube.k8s.io/updated_at=2025_11_23T09_00_11_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=default-k8s-diff-port-118762 minikube.k8s.io/primary=true
	I1123 09:00:12.400645  214550 ops.go:34] apiserver oom_adj: -16
	I1123 09:00:12.400749  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.479703  216074 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 7.056359214s
	I1123 09:00:12.901058  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:13.400921  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:13.901348  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.400890  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:14.901622  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:15.401708  214550 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:15.797055  214550 kubeadm.go:1114] duration metric: took 3.919248598s to wait for elevateKubeSystemPrivileges
	I1123 09:00:15.797081  214550 kubeadm.go:403] duration metric: took 27.027055323s to StartCluster
	I1123 09:00:15.797098  214550 settings.go:142] acquiring lock: {Name:mkd0156f6f98ed352de83fb5c4c92474ddea9220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:15.797159  214550 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 09:00:15.797780  214550 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/kubeconfig: {Name:mk75cb4a9442799c344ac747e18ea4edd6e23c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:15.797984  214550 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:00:15.798066  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:00:15.798303  214550 config.go:182] Loaded profile config "default-k8s-diff-port-118762": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:00:15.798340  214550 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:00:15.798395  214550 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-118762"
	I1123 09:00:15.798414  214550 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-118762"
	I1123 09:00:15.798437  214550 host.go:66] Checking if "default-k8s-diff-port-118762" exists ...
	I1123 09:00:15.798912  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.799494  214550 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-118762"
	I1123 09:00:15.799518  214550 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-118762"
	I1123 09:00:15.799812  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.802617  214550 out.go:179] * Verifying Kubernetes components...
	I1123 09:00:15.805826  214550 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:15.840681  214550 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-118762"
	I1123 09:00:15.840730  214550 host.go:66] Checking if "default-k8s-diff-port-118762" exists ...
	I1123 09:00:15.841178  214550 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-118762 --format={{.State.Status}}
	I1123 09:00:15.841365  214550 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:00:15.845719  214550 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:15.845739  214550 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:00:15.845799  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 09:00:15.885107  214550 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:15.885129  214550 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:00:15.885196  214550 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-118762
	I1123 09:00:15.885424  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 09:00:15.922980  214550 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33064 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/default-k8s-diff-port-118762/id_rsa Username:docker}
	I1123 09:00:16.516094  214550 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:00:16.516301  214550 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:16.565568  214550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:16.660294  214550 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:17.770086  214550 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.253733356s)
	I1123 09:00:17.770803  214550 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-118762" to be "Ready" ...
	I1123 09:00:17.771113  214550 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.254946263s)
	I1123 09:00:17.771140  214550 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1123 09:00:18.288784  214550 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-118762" context rescaled to 1 replicas
	I1123 09:00:18.294378  214550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.634044217s)
	I1123 09:00:18.294508  214550 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.728864491s)
	I1123 09:00:18.313019  214550 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 09:00:18.174934  216074 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 10.752142419s
	I1123 09:00:18.924553  216074 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 11.501560337s
	I1123 09:00:18.944911  216074 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:00:18.969340  216074 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:00:18.982694  216074 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:00:18.982935  216074 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-672503 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:00:18.996135  216074 kubeadm.go:319] [bootstrap-token] Using token: n9250s.xdwmypsz1r225um6
	I1123 09:00:18.999202  216074 out.go:252]   - Configuring RBAC rules ...
	I1123 09:00:18.999323  216074 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:00:19.010682  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:00:19.023889  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:00:19.027010  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:00:19.034948  216074 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:00:19.039786  216074 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:00:19.331973  216074 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:00:19.770619  216074 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:00:20.331084  216074 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:00:20.332385  216074 kubeadm.go:319] 
	I1123 09:00:20.332460  216074 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:00:20.332472  216074 kubeadm.go:319] 
	I1123 09:00:20.332550  216074 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:00:20.332554  216074 kubeadm.go:319] 
	I1123 09:00:20.332585  216074 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:00:20.332649  216074 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:00:20.332706  216074 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:00:20.332714  216074 kubeadm.go:319] 
	I1123 09:00:20.332768  216074 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:00:20.332775  216074 kubeadm.go:319] 
	I1123 09:00:20.332826  216074 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:00:20.332834  216074 kubeadm.go:319] 
	I1123 09:00:20.332886  216074 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:00:20.332964  216074 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:00:20.333036  216074 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:00:20.333044  216074 kubeadm.go:319] 
	I1123 09:00:20.333141  216074 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:00:20.333222  216074 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:00:20.333230  216074 kubeadm.go:319] 
	I1123 09:00:20.333314  216074 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token n9250s.xdwmypsz1r225um6 \
	I1123 09:00:20.333421  216074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c \
	I1123 09:00:20.333454  216074 kubeadm.go:319] 	--control-plane 
	I1123 09:00:20.333461  216074 kubeadm.go:319] 
	I1123 09:00:20.333554  216074 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:00:20.333574  216074 kubeadm.go:319] 
	I1123 09:00:20.333657  216074 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token n9250s.xdwmypsz1r225um6 \
	I1123 09:00:20.333764  216074 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:71362168017b15d852da766f954285ee416e72f8318701a676b1ab1d2fcceb6c 
	I1123 09:00:20.339187  216074 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1123 09:00:20.339460  216074 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1123 09:00:20.339572  216074 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:00:20.339594  216074 cni.go:84] Creating CNI manager for ""
	I1123 09:00:20.339606  216074 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:00:20.342914  216074 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:00:20.345744  216074 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:00:20.350352  216074 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:00:20.350371  216074 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:00:20.365062  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:00:18.315850  214550 addons.go:530] duration metric: took 2.517504837s for enable addons: enabled=[storage-provisioner default-storageclass]
	W1123 09:00:19.773873  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:21.774051  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:20.682862  216074 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:00:20.683008  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:20.683107  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-672503 minikube.k8s.io/updated_at=2025_11_23T09_00_20_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=embed-certs-672503 minikube.k8s.io/primary=true
	I1123 09:00:20.861424  216074 ops.go:34] apiserver oom_adj: -16
	I1123 09:00:20.881440  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:21.382484  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:21.881564  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:22.381797  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:22.881698  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:23.382044  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:23.881478  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:24.381553  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:24.882135  216074 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:00:25.085445  216074 kubeadm.go:1114] duration metric: took 4.402483472s to wait for elevateKubeSystemPrivileges
	I1123 09:00:25.085479  216074 kubeadm.go:403] duration metric: took 29.334704925s to StartCluster
	I1123 09:00:25.085499  216074 settings.go:142] acquiring lock: {Name:mkd0156f6f98ed352de83fb5c4c92474ddea9220 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:25.085586  216074 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 09:00:25.087626  216074 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/kubeconfig: {Name:mk75cb4a9442799c344ac747e18ea4edd6e23c43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:00:25.087936  216074 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:00:25.088691  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:00:25.089017  216074 config.go:182] Loaded profile config "embed-certs-672503": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:00:25.089061  216074 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:00:25.089133  216074 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-672503"
	I1123 09:00:25.089153  216074 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-672503"
	I1123 09:00:25.089179  216074 host.go:66] Checking if "embed-certs-672503" exists ...
	I1123 09:00:25.089653  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.090352  216074 addons.go:70] Setting default-storageclass=true in profile "embed-certs-672503"
	I1123 09:00:25.090381  216074 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-672503"
	I1123 09:00:25.090715  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.093412  216074 out.go:179] * Verifying Kubernetes components...
	I1123 09:00:25.100650  216074 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:00:25.132922  216074 addons.go:239] Setting addon default-storageclass=true in "embed-certs-672503"
	I1123 09:00:25.132970  216074 host.go:66] Checking if "embed-certs-672503" exists ...
	I1123 09:00:25.133464  216074 cli_runner.go:164] Run: docker container inspect embed-certs-672503 --format={{.State.Status}}
	I1123 09:00:25.134451  216074 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:00:25.137634  216074 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:25.137660  216074 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:00:25.137734  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 09:00:25.175531  216074 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:25.175555  216074 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:00:25.175631  216074 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-672503
	I1123 09:00:25.190357  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 09:00:25.214325  216074 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33069 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/embed-certs-672503/id_rsa Username:docker}
	I1123 09:00:25.395679  216074 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:00:25.445659  216074 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:00:25.568912  216074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:00:25.606764  216074 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:00:26.047827  216074 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1123 09:00:26.050542  216074 node_ready.go:35] waiting up to 6m0s for node "embed-certs-672503" to be "Ready" ...
	I1123 09:00:26.465272  216074 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	W1123 09:00:23.774226  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:26.274269  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:26.468271  216074 addons.go:530] duration metric: took 1.379204566s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1123 09:00:26.552103  216074 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-672503" context rescaled to 1 replicas
	W1123 09:00:28.054477  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:30.054656  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:28.774465  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:30.774882  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:32.553443  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:35.054660  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:33.274428  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:35.774260  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:37.554121  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:40.055622  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:38.273771  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:40.773644  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:42.553668  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:44.553840  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:43.273604  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:45.275951  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:47.773735  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:47.054612  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:49.553846  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:49.774526  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:52.273699  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:51.554200  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:54.053723  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:54.274489  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	W1123 09:00:56.773822  214550 node_ready.go:57] node "default-k8s-diff-port-118762" has "Ready":"False" status (will retry)
	I1123 09:00:57.776587  214550 node_ready.go:49] node "default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:57.776614  214550 node_ready.go:38] duration metric: took 40.005787911s for node "default-k8s-diff-port-118762" to be "Ready" ...
	I1123 09:00:57.776628  214550 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:00:57.776688  214550 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:00:57.792566  214550 api_server.go:72] duration metric: took 41.994554549s to wait for apiserver process to appear ...
	I1123 09:00:57.792589  214550 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:00:57.792608  214550 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1123 09:00:57.801332  214550 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1123 09:00:57.802591  214550 api_server.go:141] control plane version: v1.34.1
	I1123 09:00:57.802671  214550 api_server.go:131] duration metric: took 10.074405ms to wait for apiserver health ...
	I1123 09:00:57.802696  214550 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:00:57.806165  214550 system_pods.go:59] 8 kube-system pods found
	I1123 09:00:57.806249  214550 system_pods.go:61] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:57.806272  214550 system_pods.go:61] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:57.806312  214550 system_pods.go:61] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:57.806336  214550 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:57.806359  214550 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:57.806397  214550 system_pods.go:61] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:57.806420  214550 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:57.806446  214550 system_pods.go:61] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:57.806485  214550 system_pods.go:74] duration metric: took 3.749386ms to wait for pod list to return data ...
	I1123 09:00:57.806513  214550 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:00:57.809265  214550 default_sa.go:45] found service account: "default"
	I1123 09:00:57.809285  214550 default_sa.go:55] duration metric: took 2.751519ms for default service account to be created ...
	I1123 09:00:57.809298  214550 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:00:57.811926  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:57.811955  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:57.811962  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:57.811968  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:57.811972  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:57.811977  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:57.811980  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:57.811984  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:57.811991  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:57.812009  214550 retry.go:31] will retry after 274.029839ms: missing components: kube-dns
	I1123 09:00:58.095441  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:58.095474  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:00:58.095481  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:58.095487  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:58.095491  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:58.095497  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:58.095502  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:58.095506  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:58.095511  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:00:58.095526  214550 retry.go:31] will retry after 259.858354ms: missing components: kube-dns
	I1123 09:00:58.359494  214550 system_pods.go:86] 8 kube-system pods found
	I1123 09:00:58.359527  214550 system_pods.go:89] "coredns-66bc5c9577-r5snd" [cacf6afe-5fee-4f94-8eb9-c7c24526cf27] Running
	I1123 09:00:58.359536  214550 system_pods.go:89] "etcd-default-k8s-diff-port-118762" [217a8917-5e05-443f-b89d-520804178689] Running
	I1123 09:00:58.359542  214550 system_pods.go:89] "kindnet-6vk7l" [110880c9-bd5d-4589-b067-2b1f1168fa0c] Running
	I1123 09:00:58.359546  214550 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-118762" [ac8bec49-6148-4f8d-ac4d-6514576a22d7] Running
	I1123 09:00:58.359551  214550 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-118762" [aaca2928-3c47-4e94-afef-ba7d1abfcc9f] Running
	I1123 09:00:58.359556  214550 system_pods.go:89] "kube-proxy-fwc9v" [d4b1b360-1ad9-4d21-bf09-34d8328640f7] Running
	I1123 09:00:58.359560  214550 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-118762" [4d939129-4e7b-4e4e-aa53-bccfcfec49b6] Running
	I1123 09:00:58.359564  214550 system_pods.go:89] "storage-provisioner" [d0fab715-c08e-4a99-a6ba-4b4837f47aaf] Running
	I1123 09:00:58.359572  214550 system_pods.go:126] duration metric: took 550.268629ms to wait for k8s-apps to be running ...
	I1123 09:00:58.359583  214550 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:00:58.359641  214550 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:00:58.373607  214550 system_svc.go:56] duration metric: took 14.015669ms WaitForService to wait for kubelet
	I1123 09:00:58.373638  214550 kubeadm.go:587] duration metric: took 42.575629379s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:00:58.373657  214550 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:00:58.376361  214550 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:00:58.376394  214550 node_conditions.go:123] node cpu capacity is 2
	I1123 09:00:58.376408  214550 node_conditions.go:105] duration metric: took 2.746055ms to run NodePressure ...
	I1123 09:00:58.376419  214550 start.go:242] waiting for startup goroutines ...
	I1123 09:00:58.376427  214550 start.go:247] waiting for cluster config update ...
	I1123 09:00:58.376438  214550 start.go:256] writing updated cluster config ...
	I1123 09:00:58.376721  214550 ssh_runner.go:195] Run: rm -f paused
	I1123 09:00:58.380292  214550 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:00:58.385153  214550 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-r5snd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.390028  214550 pod_ready.go:94] pod "coredns-66bc5c9577-r5snd" is "Ready"
	I1123 09:00:58.390067  214550 pod_ready.go:86] duration metric: took 4.884639ms for pod "coredns-66bc5c9577-r5snd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.392315  214550 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.396380  214550 pod_ready.go:94] pod "etcd-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.396450  214550 pod_ready.go:86] duration metric: took 4.109265ms for pod "etcd-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.398716  214550 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.403219  214550 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.403254  214550 pod_ready.go:86] duration metric: took 4.51516ms for pod "kube-apiserver-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.405723  214550 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.785140  214550 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:58.785167  214550 pod_ready.go:86] duration metric: took 379.369705ms for pod "kube-controller-manager-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:58.985264  214550 pod_ready.go:83] waiting for pod "kube-proxy-fwc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.387683  214550 pod_ready.go:94] pod "kube-proxy-fwc9v" is "Ready"
	I1123 09:00:59.387712  214550 pod_ready.go:86] duration metric: took 402.417123ms for pod "kube-proxy-fwc9v" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.588360  214550 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.985884  214550 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-118762" is "Ready"
	I1123 09:00:59.985910  214550 pod_ready.go:86] duration metric: took 397.484705ms for pod "kube-scheduler-default-k8s-diff-port-118762" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:00:59.985924  214550 pod_ready.go:40] duration metric: took 1.605599928s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:00.360876  214550 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:01:00.365235  214550 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-118762" cluster and "default" namespace by default
	W1123 09:00:56.054171  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:00:58.059777  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:00.201612  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:02.554079  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	W1123 09:01:05.054145  216074 node_ready.go:57] node "embed-certs-672503" has "Ready":"False" status (will retry)
	I1123 09:01:06.553619  216074 node_ready.go:49] node "embed-certs-672503" is "Ready"
	I1123 09:01:06.553653  216074 node_ready.go:38] duration metric: took 40.503031578s for node "embed-certs-672503" to be "Ready" ...
	I1123 09:01:06.553667  216074 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:01:06.553728  216074 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:01:06.566313  216074 api_server.go:72] duration metric: took 41.478343311s to wait for apiserver process to appear ...
	I1123 09:01:06.566341  216074 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:01:06.566374  216074 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:01:06.574435  216074 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:01:06.575998  216074 api_server.go:141] control plane version: v1.34.1
	I1123 09:01:06.576024  216074 api_server.go:131] duration metric: took 9.676749ms to wait for apiserver health ...
	I1123 09:01:06.576034  216074 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:01:06.579331  216074 system_pods.go:59] 8 kube-system pods found
	I1123 09:01:06.579491  216074 system_pods.go:61] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.579500  216074 system_pods.go:61] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.579506  216074 system_pods.go:61] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.579511  216074 system_pods.go:61] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.579516  216074 system_pods.go:61] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.579524  216074 system_pods.go:61] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.579529  216074 system_pods.go:61] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.579541  216074 system_pods.go:61] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.579548  216074 system_pods.go:74] duration metric: took 3.508309ms to wait for pod list to return data ...
	I1123 09:01:06.579562  216074 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:01:06.584140  216074 default_sa.go:45] found service account: "default"
	I1123 09:01:06.584219  216074 default_sa.go:55] duration metric: took 4.649963ms for default service account to be created ...
	I1123 09:01:06.584244  216074 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:01:06.587869  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:06.587906  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.587913  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.587919  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.587923  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.587929  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.587933  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.587938  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.587945  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.587968  216074 retry.go:31] will retry after 247.424175ms: missing components: kube-dns
	I1123 09:01:06.841170  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:06.841208  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:06.841215  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:06.841222  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:06.841227  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:06.841232  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:06.841237  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:06.841241  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:06.841246  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:01:06.841262  216074 retry.go:31] will retry after 283.378756ms: missing components: kube-dns
	I1123 09:01:07.129581  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.129666  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.129688  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.129732  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.129759  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.129784  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.129819  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.129847  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.129870  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.129915  216074 retry.go:31] will retry after 365.111173ms: missing components: kube-dns
	I1123 09:01:07.499321  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.499446  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.499463  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.499471  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.499475  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.499500  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.499508  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.499546  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.499559  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.499580  216074 retry.go:31] will retry after 378.113017ms: missing components: kube-dns
	I1123 09:01:07.881489  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:07.881535  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:01:07.881542  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:07.881549  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:07.881554  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:07.881559  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:07.881562  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:07.881566  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:07.881570  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:07.881588  216074 retry.go:31] will retry after 690.773315ms: missing components: kube-dns
	I1123 09:01:08.576591  216074 system_pods.go:86] 8 kube-system pods found
	I1123 09:01:08.576623  216074 system_pods.go:89] "coredns-66bc5c9577-nhnbc" [47a7c798-9292-4915-96ab-78980671decb] Running
	I1123 09:01:08.576630  216074 system_pods.go:89] "etcd-embed-certs-672503" [12f4545c-1575-4756-a6b1-904a1a705a0c] Running
	I1123 09:01:08.576635  216074 system_pods.go:89] "kindnet-crv85" [ee0e8846-0f87-4847-a24a-d55ed9cf2c0d] Running
	I1123 09:01:08.576657  216074 system_pods.go:89] "kube-apiserver-embed-certs-672503" [accd77b1-6a29-490f-aa0e-7ec496d73c92] Running
	I1123 09:01:08.576662  216074 system_pods.go:89] "kube-controller-manager-embed-certs-672503" [9a22cfc3-b5d2-47ae-8602-2f8723d778c8] Running
	I1123 09:01:08.576666  216074 system_pods.go:89] "kube-proxy-wbnjd" [9ad92875-26b3-43b9-8680-17253a8d35d2] Running
	I1123 09:01:08.576671  216074 system_pods.go:89] "kube-scheduler-embed-certs-672503" [8e2308f3-c8cd-4e87-8099-1b815e018cbb] Running
	I1123 09:01:08.576676  216074 system_pods.go:89] "storage-provisioner" [47f41b96-311e-4020-87db-b84c42d71ba8] Running
	I1123 09:01:08.576687  216074 system_pods.go:126] duration metric: took 1.992424101s to wait for k8s-apps to be running ...
	I1123 09:01:08.576700  216074 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:01:08.576756  216074 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:01:08.591468  216074 system_svc.go:56] duration metric: took 14.759167ms WaitForService to wait for kubelet
	I1123 09:01:08.591497  216074 kubeadm.go:587] duration metric: took 43.503532438s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:01:08.591516  216074 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:01:08.594570  216074 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1123 09:01:08.594606  216074 node_conditions.go:123] node cpu capacity is 2
	I1123 09:01:08.594621  216074 node_conditions.go:105] duration metric: took 3.099272ms to run NodePressure ...
	I1123 09:01:08.594634  216074 start.go:242] waiting for startup goroutines ...
	I1123 09:01:08.594642  216074 start.go:247] waiting for cluster config update ...
	I1123 09:01:08.594654  216074 start.go:256] writing updated cluster config ...
	I1123 09:01:08.594942  216074 ssh_runner.go:195] Run: rm -f paused
	I1123 09:01:08.598542  216074 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:08.602701  216074 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-nhnbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.608070  216074 pod_ready.go:94] pod "coredns-66bc5c9577-nhnbc" is "Ready"
	I1123 09:01:08.608097  216074 pod_ready.go:86] duration metric: took 5.358349ms for pod "coredns-66bc5c9577-nhnbc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.610514  216074 pod_ready.go:83] waiting for pod "etcd-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.615555  216074 pod_ready.go:94] pod "etcd-embed-certs-672503" is "Ready"
	I1123 09:01:08.615582  216074 pod_ready.go:86] duration metric: took 5.042688ms for pod "etcd-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.618015  216074 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.624626  216074 pod_ready.go:94] pod "kube-apiserver-embed-certs-672503" is "Ready"
	I1123 09:01:08.624654  216074 pod_ready.go:86] duration metric: took 6.607794ms for pod "kube-apiserver-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:08.632607  216074 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.003276  216074 pod_ready.go:94] pod "kube-controller-manager-embed-certs-672503" is "Ready"
	I1123 09:01:09.003305  216074 pod_ready.go:86] duration metric: took 370.669957ms for pod "kube-controller-manager-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.204229  216074 pod_ready.go:83] waiting for pod "kube-proxy-wbnjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.603471  216074 pod_ready.go:94] pod "kube-proxy-wbnjd" is "Ready"
	I1123 09:01:09.603500  216074 pod_ready.go:86] duration metric: took 399.242725ms for pod "kube-proxy-wbnjd" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:09.802674  216074 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:10.203777  216074 pod_ready.go:94] pod "kube-scheduler-embed-certs-672503" is "Ready"
	I1123 09:01:10.203816  216074 pod_ready.go:86] duration metric: took 401.074978ms for pod "kube-scheduler-embed-certs-672503" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:01:10.203830  216074 pod_ready.go:40] duration metric: took 1.605254448s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:01:10.258134  216074 start.go:625] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1123 09:01:10.261593  216074 out.go:179] * Done! kubectl is now configured to use "embed-certs-672503" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	a3e2432a727b8       1611cd07b61d5       10 seconds ago       Running             busybox                   0                   e9e2afc03331d       busybox                                      default
	08e6a055c156c       138784d87c9c5       16 seconds ago       Running             coredns                   0                   0efa056b5977b       coredns-66bc5c9577-nhnbc                     kube-system
	ce730c79fdfcd       ba04bb24b9575       16 seconds ago       Running             storage-provisioner       0                   16618d3617fc6       storage-provisioner                          kube-system
	a022c95c6ebf7       05baa95f5142d       57 seconds ago       Running             kube-proxy                0                   bfd4c60efc25c       kube-proxy-wbnjd                             kube-system
	e2138f60728ce       b1a8c6f707935       58 seconds ago       Running             kindnet-cni               0                   bd22ae2b49f13       kindnet-crv85                                kube-system
	48228be1d3006       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   fd7dcc8602f94       kube-scheduler-embed-certs-672503            kube-system
	b631bc0f28a0e       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   42ca9f105eec7       kube-controller-manager-embed-certs-672503   kube-system
	6935bf91c2b5a       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   ecb38543bb5c1       kube-apiserver-embed-certs-672503            kube-system
	2e1658439e000       a1894772a478e       About a minute ago   Running             etcd                      0                   6c1565868f1b0       etcd-embed-certs-672503                      kube-system
	
	
	==> containerd <==
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.815212530Z" level=info msg="CreateContainer within sandbox \"16618d3617fc629dd2352928e691cbaa9fd1bc5b3bd90d3d653de341bcc6da8c\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"ce730c79fdfcd3dd03d8c3332496eb53c661cab9fd6e3d375d3b44e79a551d4f\""
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.816441922Z" level=info msg="StartContainer for \"ce730c79fdfcd3dd03d8c3332496eb53c661cab9fd6e3d375d3b44e79a551d4f\""
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.818027263Z" level=info msg="connecting to shim ce730c79fdfcd3dd03d8c3332496eb53c661cab9fd6e3d375d3b44e79a551d4f" address="unix:///run/containerd/s/625114e08a76d737f0d90db6f646eacf896fbbc0972839725c46af7a526025c1" protocol=ttrpc version=3
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.827248574Z" level=info msg="Container 08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.838431894Z" level=info msg="CreateContainer within sandbox \"0efa056b5977b2dff1b7d3d96f8f33f9675eb74d4e8448a776071d2258e3b7cc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a\""
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.844803306Z" level=info msg="StartContainer for \"08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a\""
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.848065353Z" level=info msg="connecting to shim 08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a" address="unix:///run/containerd/s/876e08f4bcdec2da65ded68501c79fc31841999ab5a293a8b5144b4ad6668604" protocol=ttrpc version=3
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.945320893Z" level=info msg="StartContainer for \"ce730c79fdfcd3dd03d8c3332496eb53c661cab9fd6e3d375d3b44e79a551d4f\" returns successfully"
	Nov 23 09:01:06 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:06.961785491Z" level=info msg="StartContainer for \"08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a\" returns successfully"
	Nov 23 09:01:10 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:10.814686791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b98565e7-4d04-4d9a-b95e-186c353129dc,Namespace:default,Attempt:0,}"
	Nov 23 09:01:10 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:10.899816738Z" level=info msg="connecting to shim e9e2afc03331d6e4e3d71be190c54611a94dda353e7080b864a9b5480bc638d0" address="unix:///run/containerd/s/c5b23ad017fbe6e680412e4829b91861d00735239e2b9406dc4919aaca456cb8" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:01:10 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:10.988261154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:b98565e7-4d04-4d9a-b95e-186c353129dc,Namespace:default,Attempt:0,} returns sandbox id \"e9e2afc03331d6e4e3d71be190c54611a94dda353e7080b864a9b5480bc638d0\""
	Nov 23 09:01:10 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:10.990988666Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.278729136Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.280723317Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937186"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.283476913Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.287585473Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.288400716Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.297364853s"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.288448093Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.298691131Z" level=info msg="CreateContainer within sandbox \"e9e2afc03331d6e4e3d71be190c54611a94dda353e7080b864a9b5480bc638d0\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.321270229Z" level=info msg="Container a3e2432a727b8b6416282c0432a3b637ad0a87582516bc687eff1dbbe8f6fd0d: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.331726069Z" level=info msg="CreateContainer within sandbox \"e9e2afc03331d6e4e3d71be190c54611a94dda353e7080b864a9b5480bc638d0\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"a3e2432a727b8b6416282c0432a3b637ad0a87582516bc687eff1dbbe8f6fd0d\""
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.334224771Z" level=info msg="StartContainer for \"a3e2432a727b8b6416282c0432a3b637ad0a87582516bc687eff1dbbe8f6fd0d\""
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.335767773Z" level=info msg="connecting to shim a3e2432a727b8b6416282c0432a3b637ad0a87582516bc687eff1dbbe8f6fd0d" address="unix:///run/containerd/s/c5b23ad017fbe6e680412e4829b91861d00735239e2b9406dc4919aaca456cb8" protocol=ttrpc version=3
	Nov 23 09:01:13 embed-certs-672503 containerd[761]: time="2025-11-23T09:01:13.435533261Z" level=info msg="StartContainer for \"a3e2432a727b8b6416282c0432a3b637ad0a87582516bc687eff1dbbe8f6fd0d\" returns successfully"
	
	
	==> coredns [08e6a055c156cf276001a4ea8cce7bbd6a6e89643ccedc348169be6f1f678a8a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53163 - 48004 "HINFO IN 4256419541080546424.6439688394332634916. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034440699s
	
	
	==> describe nodes <==
	Name:               embed-certs-672503
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-672503
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=embed-certs-672503
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_00_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:00:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-672503
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:01:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:01:21 +0000   Sun, 23 Nov 2025 09:00:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:01:21 +0000   Sun, 23 Nov 2025 09:00:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:01:21 +0000   Sun, 23 Nov 2025 09:00:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:01:21 +0000   Sun, 23 Nov 2025 09:01:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-672503
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                fc675532-bd47-4c37-8a40-91e311d7dcb4
	  Boot ID:                    86d8501c-1df5-4d7e-90cb-d9ad951202c5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-nhnbc                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     59s
	  kube-system                 etcd-embed-certs-672503                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-crv85                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      59s
	  kube-system                 kube-apiserver-embed-certs-672503             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-embed-certs-672503    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-wbnjd                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-scheduler-embed-certs-672503             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 57s                kube-proxy       
	  Normal   NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 76s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  76s (x8 over 76s)  kubelet          Node embed-certs-672503 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    76s (x8 over 76s)  kubelet          Node embed-certs-672503 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     76s (x7 over 76s)  kubelet          Node embed-certs-672503 status is now: NodeHasSufficientPID
	  Normal   Starting                 76s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 64s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  64s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s                kubelet          Node embed-certs-672503 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s                kubelet          Node embed-certs-672503 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s                kubelet          Node embed-certs-672503 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s                node-controller  Node embed-certs-672503 event: Registered Node embed-certs-672503 in Controller
	  Normal   NodeReady                17s                kubelet          Node embed-certs-672503 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014670] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505841] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033008] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.738583] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.057424] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:10] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:26] hrtimer: interrupt took 58442338 ns
	
	
	==> etcd [2e1658439e00054d4c123a0704ae2372f64da746c298df15a9d59f81c23e7dcc] <==
	{"level":"warn","ts":"2025-11-23T09:00:13.506766Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.575570Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.619275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.655545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.671823Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.698869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.722370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.753385Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.786144Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.803583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.834613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.938415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.939626Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:13.989516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.041149Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.072867Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.090793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59754","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.108934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.138745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.164090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.182525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.219717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.248176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.333400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:00:14.475947Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59890","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:01:24 up  1:43,  0 user,  load average: 2.66, 3.44, 2.95
	Linux embed-certs-672503 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e2138f60728ce59a3b0b07284a407bcd8d065696f74d0e865f40f4d1b3de6a8a] <==
	I1123 09:00:25.929202       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:00:25.929538       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:00:25.929650       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:00:25.929662       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:00:25.929676       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:00:26Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:00:26.220364       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:00:26.220463       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:00:26.220542       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:00:26.221165       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:00:56.221514       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 09:00:56.221529       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:00:56.221654       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1123 09:00:56.222831       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1123 09:00:57.722105       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:00:57.722183       1 metrics.go:72] Registering metrics
	I1123 09:00:57.722269       1 controller.go:711] "Syncing nftables rules"
	I1123 09:01:06.220792       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:01:06.220854       1 main.go:301] handling current node
	I1123 09:01:16.220035       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:01:16.220090       1 main.go:301] handling current node
	
	
	==> kube-apiserver [6935bf91c2b5a9bd9a3a879a561a5fe8b7706d73efed8292fb9a15b2b1fb8bd9] <==
	I1123 09:00:16.112571       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:00:16.112892       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:00:16.114026       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:00:16.120153       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:00:16.124642       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:00:16.140135       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:16.172282       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:00:16.648341       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:00:16.688557       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:00:16.688586       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:00:18.362210       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:00:18.471094       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:00:18.640445       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:00:18.648637       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 09:00:18.649929       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:00:18.655796       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:00:18.757483       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:00:19.744392       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:00:19.769130       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:00:19.786013       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:00:24.560804       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:24.566099       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:00:24.724871       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:00:24.858206       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:01:20.648114       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:58014: use of closed network connection
	
	
	==> kube-controller-manager [b631bc0f28a0e89a9a6d9e7776f78ca994bfb0ae27f75a8fd29f7e8d18f46472] <==
	I1123 09:00:23.755767       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:00:23.755875       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:00:23.756107       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:00:23.757396       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:00:23.757501       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:00:23.757542       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:00:23.762191       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:23.765833       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:00:23.765993       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:00:23.766081       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:23.766145       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:00:23.766154       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:00:23.777111       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:00:23.789822       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:00:23.796131       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:00:23.801324       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:00:23.802690       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:00:23.802904       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:00:23.803049       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:00:23.803330       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:00:23.804767       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:00:23.805132       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:00:23.805297       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:00:23.812393       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 09:01:08.759180       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a022c95c6ebf7ec165890e7afb9f737a74e7d629a3e09999147b89095bfe6217] <==
	I1123 09:00:26.194401       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:00:26.297289       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:00:26.398138       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:00:26.398174       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:00:26.398268       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:00:26.449287       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:00:26.449344       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:00:26.460756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:00:26.461111       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:00:26.461126       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:26.468828       1 config.go:200] "Starting service config controller"
	I1123 09:00:26.469683       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:00:26.469742       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:00:26.471028       1 config.go:309] "Starting node config controller"
	I1123 09:00:26.471054       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:00:26.471061       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:00:26.469682       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:00:26.469653       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:00:26.471585       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:00:26.570181       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:00:26.572439       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:00:26.572440       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [48228be1d30060a20acf9df4afdfb84a0d717b1726f690802f7837d337f1f24b] <==
	I1123 09:00:13.497947       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:00:18.006941       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:00:18.006989       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:00:18.007001       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:00:18.007012       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:00:18.097461       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:00:18.097503       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:00:18.100896       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:00:18.100993       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:18.101017       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:00:18.101037       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1123 09:00:18.167602       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 09:00:19.201750       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:00:20 embed-certs-672503 kubelet[1478]: I1123 09:00:20.862317    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-672503" podStartSLOduration=0.862301647 podStartE2EDuration="862.301647ms" podCreationTimestamp="2025-11-23 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:20.861796937 +0000 UTC m=+1.273038194" watchObservedRunningTime="2025-11-23 09:00:20.862301647 +0000 UTC m=+1.273542886"
	Nov 23 09:00:20 embed-certs-672503 kubelet[1478]: I1123 09:00:20.895037    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-672503" podStartSLOduration=0.895017588 podStartE2EDuration="895.017588ms" podCreationTimestamp="2025-11-23 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:20.878793489 +0000 UTC m=+1.290034737" watchObservedRunningTime="2025-11-23 09:00:20.895017588 +0000 UTC m=+1.306258828"
	Nov 23 09:00:20 embed-certs-672503 kubelet[1478]: I1123 09:00:20.925264    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-672503" podStartSLOduration=0.925244478 podStartE2EDuration="925.244478ms" podCreationTimestamp="2025-11-23 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:20.895636686 +0000 UTC m=+1.306877926" watchObservedRunningTime="2025-11-23 09:00:20.925244478 +0000 UTC m=+1.336485726"
	Nov 23 09:00:21 embed-certs-672503 kubelet[1478]: I1123 09:00:21.351814    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-672503" podStartSLOduration=1.35179488 podStartE2EDuration="1.35179488s" podCreationTimestamp="2025-11-23 09:00:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:20.926301864 +0000 UTC m=+1.337543104" watchObservedRunningTime="2025-11-23 09:00:21.35179488 +0000 UTC m=+1.763036128"
	Nov 23 09:00:23 embed-certs-672503 kubelet[1478]: I1123 09:00:23.782299    1478 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:00:23 embed-certs-672503 kubelet[1478]: I1123 09:00:23.783079    1478 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.066648    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ee0e8846-0f87-4847-a24a-d55ed9cf2c0d-xtables-lock\") pod \"kindnet-crv85\" (UID: \"ee0e8846-0f87-4847-a24a-d55ed9cf2c0d\") " pod="kube-system/kindnet-crv85"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.066804    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9ad92875-26b3-43b9-8680-17253a8d35d2-kube-proxy\") pod \"kube-proxy-wbnjd\" (UID: \"9ad92875-26b3-43b9-8680-17253a8d35d2\") " pod="kube-system/kube-proxy-wbnjd"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.066831    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9ad92875-26b3-43b9-8680-17253a8d35d2-xtables-lock\") pod \"kube-proxy-wbnjd\" (UID: \"9ad92875-26b3-43b9-8680-17253a8d35d2\") " pod="kube-system/kube-proxy-wbnjd"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.066896    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/ee0e8846-0f87-4847-a24a-d55ed9cf2c0d-cni-cfg\") pod \"kindnet-crv85\" (UID: \"ee0e8846-0f87-4847-a24a-d55ed9cf2c0d\") " pod="kube-system/kindnet-crv85"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.066961    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jpmf\" (UniqueName: \"kubernetes.io/projected/9ad92875-26b3-43b9-8680-17253a8d35d2-kube-api-access-6jpmf\") pod \"kube-proxy-wbnjd\" (UID: \"9ad92875-26b3-43b9-8680-17253a8d35d2\") " pod="kube-system/kube-proxy-wbnjd"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.067024    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvl6w\" (UniqueName: \"kubernetes.io/projected/ee0e8846-0f87-4847-a24a-d55ed9cf2c0d-kube-api-access-fvl6w\") pod \"kindnet-crv85\" (UID: \"ee0e8846-0f87-4847-a24a-d55ed9cf2c0d\") " pod="kube-system/kindnet-crv85"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.067045    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9ad92875-26b3-43b9-8680-17253a8d35d2-lib-modules\") pod \"kube-proxy-wbnjd\" (UID: \"9ad92875-26b3-43b9-8680-17253a8d35d2\") " pod="kube-system/kube-proxy-wbnjd"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.067064    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ee0e8846-0f87-4847-a24a-d55ed9cf2c0d-lib-modules\") pod \"kindnet-crv85\" (UID: \"ee0e8846-0f87-4847-a24a-d55ed9cf2c0d\") " pod="kube-system/kindnet-crv85"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.183522    1478 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 09:00:25 embed-certs-672503 kubelet[1478]: I1123 09:00:25.873934    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-crv85" podStartSLOduration=1.873916832 podStartE2EDuration="1.873916832s" podCreationTimestamp="2025-11-23 09:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:25.873584268 +0000 UTC m=+6.284825508" watchObservedRunningTime="2025-11-23 09:00:25.873916832 +0000 UTC m=+6.285158080"
	Nov 23 09:00:26 embed-certs-672503 kubelet[1478]: I1123 09:00:26.851922    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wbnjd" podStartSLOduration=2.8519019979999998 podStartE2EDuration="2.851901998s" podCreationTimestamp="2025-11-23 09:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:00:26.851849075 +0000 UTC m=+7.263090331" watchObservedRunningTime="2025-11-23 09:00:26.851901998 +0000 UTC m=+7.263143238"
	Nov 23 09:01:06 embed-certs-672503 kubelet[1478]: I1123 09:01:06.276846    1478 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:01:06 embed-certs-672503 kubelet[1478]: I1123 09:01:06.524588    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47a7c798-9292-4915-96ab-78980671decb-config-volume\") pod \"coredns-66bc5c9577-nhnbc\" (UID: \"47a7c798-9292-4915-96ab-78980671decb\") " pod="kube-system/coredns-66bc5c9577-nhnbc"
	Nov 23 09:01:06 embed-certs-672503 kubelet[1478]: I1123 09:01:06.524656    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47f41b96-311e-4020-87db-b84c42d71ba8-tmp\") pod \"storage-provisioner\" (UID: \"47f41b96-311e-4020-87db-b84c42d71ba8\") " pod="kube-system/storage-provisioner"
	Nov 23 09:01:06 embed-certs-672503 kubelet[1478]: I1123 09:01:06.524680    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6vz8\" (UniqueName: \"kubernetes.io/projected/47f41b96-311e-4020-87db-b84c42d71ba8-kube-api-access-l6vz8\") pod \"storage-provisioner\" (UID: \"47f41b96-311e-4020-87db-b84c42d71ba8\") " pod="kube-system/storage-provisioner"
	Nov 23 09:01:06 embed-certs-672503 kubelet[1478]: I1123 09:01:06.524705    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkl6\" (UniqueName: \"kubernetes.io/projected/47a7c798-9292-4915-96ab-78980671decb-kube-api-access-5lkl6\") pod \"coredns-66bc5c9577-nhnbc\" (UID: \"47a7c798-9292-4915-96ab-78980671decb\") " pod="kube-system/coredns-66bc5c9577-nhnbc"
	Nov 23 09:01:07 embed-certs-672503 kubelet[1478]: I1123 09:01:07.992169    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.992139361 podStartE2EDuration="41.992139361s" podCreationTimestamp="2025-11-23 09:00:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:01:07.001137529 +0000 UTC m=+47.412378769" watchObservedRunningTime="2025-11-23 09:01:07.992139361 +0000 UTC m=+48.403380601"
	Nov 23 09:01:08 embed-certs-672503 kubelet[1478]: I1123 09:01:08.015184    1478 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nhnbc" podStartSLOduration=44.015149346 podStartE2EDuration="44.015149346s" podCreationTimestamp="2025-11-23 09:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:01:07.993329803 +0000 UTC m=+48.404571059" watchObservedRunningTime="2025-11-23 09:01:08.015149346 +0000 UTC m=+48.426390586"
	Nov 23 09:01:10 embed-certs-672503 kubelet[1478]: I1123 09:01:10.652632    1478 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77ws9\" (UniqueName: \"kubernetes.io/projected/b98565e7-4d04-4d9a-b95e-186c353129dc-kube-api-access-77ws9\") pod \"busybox\" (UID: \"b98565e7-4d04-4d9a-b95e-186c353129dc\") " pod="default/busybox"
	
	
	==> storage-provisioner [ce730c79fdfcd3dd03d8c3332496eb53c661cab9fd6e3d375d3b44e79a551d4f] <==
	I1123 09:01:06.949523       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:01:06.965358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:07.011715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:01:07.011972       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:01:07.013386       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a51e74d1-c070-46d1-896d-b299af8b25af", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-672503_186aa864-b68f-4600-b9f5-1419bffbdf2a became leader
	I1123 09:01:07.015437       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-672503_186aa864-b68f-4600-b9f5-1419bffbdf2a!
	W1123 09:01:07.023097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:07.030111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:01:07.116226       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-672503_186aa864-b68f-4600-b9f5-1419bffbdf2a!
	W1123 09:01:09.033080       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:09.038348       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:11.058465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:11.073084       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:13.076301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:13.084900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:15.092712       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:15.103934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:17.108483       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:17.119699       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:19.123426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:19.131197       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:21.135384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:21.142120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:23.145747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:01:23.153555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-672503 -n embed-certs-672503
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-672503 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (15.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-052851 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0582e2dd-ee3f-4204-8ea2-0e7de31689f5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0582e2dd-ee3f-4204-8ea2-0e7de31689f5] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.007253257s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-052851 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
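Each DeployApp failure in this run is the same assertion: the busybox pod becomes healthy, but 'ulimit -n' inside it reports a soft open-files limit of 1024 where the test expects 1048576. A minimal way to re-run the comparison by hand, as a sketch using the profile and container names from the logs below (the docker exec step is an assumption about where the limit might be inherited from, not part of the test itself):

	# The exact check the test performs, inside the busybox pod:
	kubectl --context no-preload-052851 exec busybox -- /bin/sh -c "ulimit -n"

	# Assumed follow-up: the soft limit reported inside the minikube node container,
	# to see whether the 1024 value comes from the node rather than the pod spec.
	docker exec no-preload-052851 /bin/sh -c "ulimit -n"

If both commands print 1024, the limit is plausibly inherited from the node container's defaults rather than anything specific to the busybox pod; if only the pod shows 1024, the runtime or pod configuration is the more likely place to look.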
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-052851
helpers_test.go:243: (dbg) docker inspect no-preload-052851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8",
	        "Created": "2025-11-23T09:02:45.067625473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:02:45.223662085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8/hosts",
	        "LogPath": "/var/lib/docker/containers/b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8/b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8-json.log",
	        "Name": "/no-preload-052851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-052851:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-052851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8",
	                "LowerDir": "/var/lib/docker/overlay2/fb27790462d65d381a9001812d219037b499bdb7891aa84bdd9348fb9b016f77-init/diff:/var/lib/docker/overlay2/e1de88c117c0c773e1fa636243190fd97eadaa5a8e1ee08fd53827cbac767d35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fb27790462d65d381a9001812d219037b499bdb7891aa84bdd9348fb9b016f77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fb27790462d65d381a9001812d219037b499bdb7891aa84bdd9348fb9b016f77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fb27790462d65d381a9001812d219037b499bdb7891aa84bdd9348fb9b016f77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-052851",
	                "Source": "/var/lib/docker/volumes/no-preload-052851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-052851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-052851",
	                "name.minikube.sigs.k8s.io": "no-preload-052851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61757e726949ec317d333580dceb584515396f09f97f879a2ecec88e966aaa41",
	            "SandboxKey": "/var/run/docker/netns/61757e726949",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-052851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:77:ca:0e:a2:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "80b14e15ba547769052a662f0ea612743dcde4f1cb9583a7d03955199c63e88b",
	                    "EndpointID": "3da34e990daebf013ac957b849a450a8b8812c70dc1edbc48f8a070c2ba49e2b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-052851",
	                        "b96168143a8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-052851 -n no-preload-052851
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-052851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-052851 logs -n 25: (1.337802304s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────
─────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────
─────────────┤
	│ addons  │ enable dashboard -p embed-certs-672503 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p embed-certs-672503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:02 UTC │
	│ image   │ default-k8s-diff-port-118762 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ pause   │ -p default-k8s-diff-port-118762 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ unpause │ -p default-k8s-diff-port-118762 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p default-k8s-diff-port-118762                                                                                                                                                                                                                     │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p default-k8s-diff-port-118762                                                                                                                                                                                                                     │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p disable-driver-mounts-209145                                                                                                                                                                                                                     │ disable-driver-mounts-209145 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ start   │ -p no-preload-052851 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-052851            │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:03 UTC │
	│ image   │ embed-certs-672503 image list --format=json                                                                                                                                                                                                         │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ pause   │ -p embed-certs-672503 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ unpause │ -p embed-certs-672503 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p embed-certs-672503                                                                                                                                                                                                                               │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p embed-certs-672503                                                                                                                                                                                                                               │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ start   │ -p newest-cni-948460 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:03 UTC │
	│ addons  │ enable metrics-server -p newest-cni-948460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:03 UTC │ 23 Nov 25 09:03 UTC │
	│ stop    │ -p newest-cni-948460 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:03 UTC │ 23 Nov 25 09:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-948460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:03 UTC │ 23 Nov 25 09:03 UTC │
	│ start   │ -p newest-cni-948460 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:03 UTC │ 23 Nov 25 09:04 UTC │
	│ image   │ newest-cni-948460 image list --format=json                                                                                                                                                                                                          │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │ 23 Nov 25 09:04 UTC │
	│ pause   │ -p newest-cni-948460 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │ 23 Nov 25 09:04 UTC │
	│ unpause │ -p newest-cni-948460 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │ 23 Nov 25 09:04 UTC │
	│ delete  │ -p newest-cni-948460                                                                                                                                                                                                                                │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │ 23 Nov 25 09:04 UTC │
	│ delete  │ -p newest-cni-948460                                                                                                                                                                                                                                │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │ 23 Nov 25 09:04 UTC │
	│ start   │ -p auto-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-694698                  │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────
─────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:04:07
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:04:07.656917  241302 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:04:07.657129  241302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:04:07.657159  241302 out.go:374] Setting ErrFile to fd 2...
	I1123 09:04:07.657179  241302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:04:07.657466  241302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 09:04:07.657922  241302 out.go:368] Setting JSON to false
	I1123 09:04:07.658935  241302 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6400,"bootTime":1763882248,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 09:04:07.659036  241302 start.go:143] virtualization:  
	I1123 09:04:07.663225  241302 out.go:179] * [auto-694698] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:04:07.666798  241302 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:04:07.666882  241302 notify.go:221] Checking for updates...
	I1123 09:04:07.673644  241302 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:04:07.676787  241302 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 09:04:07.679790  241302 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 09:04:07.682828  241302 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:04:07.685752  241302 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:04:07.689354  241302 config.go:182] Loaded profile config "no-preload-052851": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:04:07.689499  241302 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:04:07.716747  241302 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:04:07.716868  241302 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:04:07.785855  241302 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 09:04:07.776345247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:04:07.785964  241302 docker.go:319] overlay module found
	I1123 09:04:07.789082  241302 out.go:179] * Using the docker driver based on user configuration
	I1123 09:04:07.792071  241302 start.go:309] selected driver: docker
	I1123 09:04:07.792099  241302 start.go:927] validating driver "docker" against <nil>
	I1123 09:04:07.792123  241302 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:04:07.792876  241302 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:04:07.852979  241302 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 09:04:07.84398739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:04:07.853158  241302 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:04:07.853414  241302 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:04:07.856440  241302 out.go:179] * Using Docker driver with root privileges
	I1123 09:04:07.859337  241302 cni.go:84] Creating CNI manager for ""
	I1123 09:04:07.859411  241302 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:04:07.859426  241302 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:04:07.859505  241302 start.go:353] cluster config:
	{Name:auto-694698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-694698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:con
tainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:04:07.862614  241302 out.go:179] * Starting "auto-694698" primary control-plane node in "auto-694698" cluster
	I1123 09:04:07.865432  241302 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:04:07.868375  241302 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:04:07.871158  241302 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:04:07.871206  241302 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 09:04:07.871214  241302 cache.go:65] Caching tarball of preloaded images
	I1123 09:04:07.871285  241302 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:04:07.871295  241302 preload.go:238] Found /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 09:04:07.871484  241302 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 09:04:07.871650  241302 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/config.json ...
	I1123 09:04:07.871682  241302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/config.json: {Name:mk04e44bdc5e4436c6e1bf3178f0ce5d5347dd50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:04:07.890956  241302 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:04:07.890979  241302 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:04:07.891008  241302 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:04:07.891038  241302 start.go:360] acquireMachinesLock for auto-694698: {Name:mkd24b699cf00e4f5b60bd47a41ba901a356997a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:04:07.891163  241302 start.go:364] duration metric: took 104.837µs to acquireMachinesLock for "auto-694698"
	I1123 09:04:07.891195  241302 start.go:93] Provisioning new machine with config: &{Name:auto-694698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-694698 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:04:07.891266  241302 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	251af50c56799       1611cd07b61d5       8 seconds ago       Running             busybox                   0                   fd6cd83190c6b       busybox                                     default
	728267be9c9dd       138784d87c9c5       15 seconds ago      Running             coredns                   0                   e4a45d9e79e33       coredns-66bc5c9577-7gp9k                    kube-system
	67799c0024a71       66749159455b3       15 seconds ago      Running             storage-provisioner       0                   98b414b9abda1       storage-provisioner                         kube-system
	9962d5768cb79       b1a8c6f707935       26 seconds ago      Running             kindnet-cni               0                   af8bf22dd4e5e       kindnet-9gcl6                               kube-system
	243fad4697ab1       05baa95f5142d       29 seconds ago      Running             kube-proxy                0                   791bab83d73ce       kube-proxy-mtj7d                            kube-system
	fa11ed059cda7       43911e833d64d       47 seconds ago      Running             kube-apiserver            0                   d3f39c0e1a0b5       kube-apiserver-no-preload-052851            kube-system
	8cd6a5c35e5fe       b5f57ec6b9867       47 seconds ago      Running             kube-scheduler            0                   700c0c58dbbe1       kube-scheduler-no-preload-052851            kube-system
	5b84ffb984802       7eb2c6ff0c5a7       48 seconds ago      Running             kube-controller-manager   0                   d8dac44bd0635       kube-controller-manager-no-preload-052851   kube-system
	b0d6788efdcdb       a1894772a478e       48 seconds ago      Running             etcd                      0                   af5ad45e26e8b       etcd-no-preload-052851                      kube-system
	
	
	==> containerd <==
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.411276797Z" level=info msg="CreateContainer within sandbox \"98b414b9abda1760acf3a59e7faf08acfddb5a5759cf9cde7018d20180a5cac6\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"67799c0024a713f10d401d68a9129c19a69e2f0f62f9812b12a2d10c9deab03f\""
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.415315083Z" level=info msg="StartContainer for \"67799c0024a713f10d401d68a9129c19a69e2f0f62f9812b12a2d10c9deab03f\""
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.416481493Z" level=info msg="connecting to shim 67799c0024a713f10d401d68a9129c19a69e2f0f62f9812b12a2d10c9deab03f" address="unix:///run/containerd/s/c5d1f0a3eaf09980d9f70facabc993b4c2e00a1cbf2d86210452a4df20beab1f" protocol=ttrpc version=3
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.444173654Z" level=info msg="Container 728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.468802274Z" level=info msg="CreateContainer within sandbox \"e4a45d9e79e33995cf1d4f454e329525d1df6dd1d69790f3242813edef439347\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784\""
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.482603193Z" level=info msg="StartContainer for \"728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784\""
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.485182529Z" level=info msg="connecting to shim 728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784" address="unix:///run/containerd/s/80eaff74f37e23eef2e57f55250a56d12aaa5bd6343f053f8474ce59a9d8b555" protocol=ttrpc version=3
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.587683504Z" level=info msg="StartContainer for \"67799c0024a713f10d401d68a9129c19a69e2f0f62f9812b12a2d10c9deab03f\" returns successfully"
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.691745714Z" level=info msg="StartContainer for \"728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784\" returns successfully"
	Nov 23 09:04:00 no-preload-052851 containerd[759]: time="2025-11-23T09:04:00.600372578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0582e2dd-ee3f-4204-8ea2-0e7de31689f5,Namespace:default,Attempt:0,}"
	Nov 23 09:04:00 no-preload-052851 containerd[759]: time="2025-11-23T09:04:00.675308630Z" level=info msg="connecting to shim fd6cd83190c6b94db4dea88e625d9aae2405585b7e6f9d83d8b7fbd0695962b7" address="unix:///run/containerd/s/d522c789b6a22b66268703fe51cb03379ef661a01f3a01edda6c2933a9396a3c" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:04:00 no-preload-052851 containerd[759]: time="2025-11-23T09:04:00.797212058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0582e2dd-ee3f-4204-8ea2-0e7de31689f5,Namespace:default,Attempt:0,} returns sandbox id \"fd6cd83190c6b94db4dea88e625d9aae2405585b7e6f9d83d8b7fbd0695962b7\""
	Nov 23 09:04:00 no-preload-052851 containerd[759]: time="2025-11-23T09:04:00.806611349Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.906514799Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.909872948Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.912469081Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.916714737Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.918162453Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.111504442s"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.918479814Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.930212277Z" level=info msg="CreateContainer within sandbox \"fd6cd83190c6b94db4dea88e625d9aae2405585b7e6f9d83d8b7fbd0695962b7\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.956003213Z" level=info msg="Container 251af50c56799510e172c14d607cb61ca2409100de70586ed502ff18b9ce96dc: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.966118720Z" level=info msg="CreateContainer within sandbox \"fd6cd83190c6b94db4dea88e625d9aae2405585b7e6f9d83d8b7fbd0695962b7\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"251af50c56799510e172c14d607cb61ca2409100de70586ed502ff18b9ce96dc\""
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.968536537Z" level=info msg="StartContainer for \"251af50c56799510e172c14d607cb61ca2409100de70586ed502ff18b9ce96dc\""
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.969582437Z" level=info msg="connecting to shim 251af50c56799510e172c14d607cb61ca2409100de70586ed502ff18b9ce96dc" address="unix:///run/containerd/s/d522c789b6a22b66268703fe51cb03379ef661a01f3a01edda6c2933a9396a3c" protocol=ttrpc version=3
	Nov 23 09:04:03 no-preload-052851 containerd[759]: time="2025-11-23T09:04:03.084173157Z" level=info msg="StartContainer for \"251af50c56799510e172c14d607cb61ca2409100de70586ed502ff18b9ce96dc\" returns successfully"
	
	
	==> coredns [728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33279 - 12748 "HINFO IN 8031407418468007153.8116781740704995630. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.053909766s
	
	
	==> describe nodes <==
	Name:               no-preload-052851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-052851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=no-preload-052851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_03_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:03:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-052851
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:04:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:04:05 +0000   Sun, 23 Nov 2025 09:03:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:04:05 +0000   Sun, 23 Nov 2025 09:03:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:04:05 +0000   Sun, 23 Nov 2025 09:03:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:04:05 +0000   Sun, 23 Nov 2025 09:03:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-052851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                a506858d-1181-4822-a17a-b714d78afbb5
	  Boot ID:                    86d8501c-1df5-4d7e-90cb-d9ad951202c5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-7gp9k                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-052851                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         37s
	  kube-system                 kindnet-9gcl6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-no-preload-052851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-controller-manager-no-preload-052851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-mtj7d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-no-preload-052851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   NodeAllocatableEnforced  49s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 49s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  49s (x8 over 49s)  kubelet          Node no-preload-052851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    49s (x8 over 49s)  kubelet          Node no-preload-052851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     49s (x7 over 49s)  kubelet          Node no-preload-052851 status is now: NodeHasSufficientPID
	  Normal   Starting                 49s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 36s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 36s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  36s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  36s                kubelet          Node no-preload-052851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    36s                kubelet          Node no-preload-052851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     36s                kubelet          Node no-preload-052851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           31s                node-controller  Node no-preload-052851 event: Registered Node no-preload-052851 in Controller
	  Normal   NodeReady                16s                kubelet          Node no-preload-052851 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014670] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505841] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033008] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.738583] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.057424] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:10] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:26] hrtimer: interrupt took 58442338 ns
	
	
	==> etcd [b0d6788efdcdb06f98fafe1108b059951e13e8001f59a53429489d87bdba3fff] <==
	{"level":"warn","ts":"2025-11-23T09:03:28.850076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:28.923988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.003254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.061874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.124027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.147230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.190502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.216372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.245769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.279757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.312516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.335654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.366286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.440296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.474540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.507259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.529181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.572650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.621408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.710601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.749226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.781632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.827980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.858137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:30.003194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50768","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:04:11 up  1:46,  0 user,  load average: 6.31, 4.54, 3.44
	Linux no-preload-052851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9962d5768cb79509a2fa79c071471452fc95d1b60514007b6dce165a9353cbda] <==
	I1123 09:03:45.433548       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:03:45.433821       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:03:45.433975       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:03:45.433988       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:03:45.434003       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:03:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:03:45.723760       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:03:45.724215       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:03:45.724337       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:03:45.724644       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:03:45.924816       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:03:45.924843       1 metrics.go:72] Registering metrics
	I1123 09:03:45.924892       1 controller.go:711] "Syncing nftables rules"
	I1123 09:03:55.643438       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:03:55.643492       1 main.go:301] handling current node
	I1123 09:04:05.639509       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:04:05.639553       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fa11ed059cda74006b872347a93936aa47ce76fa8d40cee0d71eafc335fb60da] <==
	I1123 09:03:32.040493       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:03:32.045359       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 09:03:32.045401       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 09:03:32.100271       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:03:32.105336       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:03:32.141292       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:03:32.149648       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:03:32.420717       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:03:32.432059       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:03:32.432294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:03:34.010629       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:03:34.172193       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:03:34.411656       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:03:34.431897       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 09:03:34.436375       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:03:34.446287       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:03:35.091544       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:03:35.204581       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:03:35.253102       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:03:35.277542       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:03:40.925398       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:03:40.944550       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:03:40.978651       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:03:41.221500       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:04:10.503042       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:33096: use of closed network connection
	
	
	==> kube-controller-manager [5b84ffb9848024293ad234427b40d5770e927e950b0168f0038cfc367da02878] <==
	I1123 09:03:40.185569       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-052851" podCIDRs=["10.244.0.0/24"]
	I1123 09:03:40.196935       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:03:40.208337       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:03:40.218379       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:03:40.219582       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:03:40.219599       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:03:40.225359       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:03:40.225943       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:03:40.226102       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:03:40.226253       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:03:40.229002       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:03:40.219611       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:03:40.234028       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:03:40.235292       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:03:40.240594       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:03:40.240813       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:03:40.240990       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-052851"
	I1123 09:03:40.241071       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 09:03:40.249857       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:03:40.256263       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:03:40.289729       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:03:40.289759       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:03:40.289767       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:03:40.290409       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:04:00.250892       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [243fad4697ab1802b6ec6733d5e2bf553ca67c522eacf4f66eea2479678b37d9] <==
	I1123 09:03:42.507862       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:03:42.611420       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:03:42.711934       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:03:42.711966       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:03:42.712045       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:03:43.034917       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:03:43.034994       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:03:43.061282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:03:43.061698       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:03:43.061715       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:03:43.068729       1 config.go:200] "Starting service config controller"
	I1123 09:03:43.068742       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:03:43.068757       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:03:43.068762       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:03:43.068772       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:03:43.068775       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:03:43.077307       1 config.go:309] "Starting node config controller"
	I1123 09:03:43.077369       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:03:43.077395       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:03:43.170117       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:03:43.170154       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:03:43.170200       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8cd6a5c35e5fedbc0c47463b2e54e04aa7dc240745f63616e6000ce5994b1a80] <==
	I1123 09:03:26.899965       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:03:33.597693       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:03:33.599646       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:03:33.599788       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:03:33.599884       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:03:33.632647       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:03:33.632874       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:03:33.647227       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:03:33.649532       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:03:33.648529       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:03:33.648500       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1123 09:03:33.678867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 09:03:34.650353       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:03:36 no-preload-052851 kubelet[2101]: I1123 09:03:36.410618    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-052851" podStartSLOduration=1.410603022 podStartE2EDuration="1.410603022s" podCreationTimestamp="2025-11-23 09:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:36.410373211 +0000 UTC m=+1.379421480" watchObservedRunningTime="2025-11-23 09:03:36.410603022 +0000 UTC m=+1.379651291"
	Nov 23 09:03:36 no-preload-052851 kubelet[2101]: I1123 09:03:36.440925    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-052851" podStartSLOduration=2.440903916 podStartE2EDuration="2.440903916s" podCreationTimestamp="2025-11-23 09:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:36.424241493 +0000 UTC m=+1.393289779" watchObservedRunningTime="2025-11-23 09:03:36.440903916 +0000 UTC m=+1.409952186"
	Nov 23 09:03:36 no-preload-052851 kubelet[2101]: I1123 09:03:36.441053    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-052851" podStartSLOduration=1.441047613 podStartE2EDuration="1.441047613s" podCreationTimestamp="2025-11-23 09:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:36.440992548 +0000 UTC m=+1.410040826" watchObservedRunningTime="2025-11-23 09:03:36.441047613 +0000 UTC m=+1.410095916"
	Nov 23 09:03:36 no-preload-052851 kubelet[2101]: I1123 09:03:36.453526    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-052851" podStartSLOduration=1.453487076 podStartE2EDuration="1.453487076s" podCreationTimestamp="2025-11-23 09:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:36.452855678 +0000 UTC m=+1.421903956" watchObservedRunningTime="2025-11-23 09:03:36.453487076 +0000 UTC m=+1.422535370"
	Nov 23 09:03:40 no-preload-052851 kubelet[2101]: I1123 09:03:40.215139    2101 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:03:40 no-preload-052851 kubelet[2101]: I1123 09:03:40.217396    2101 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.288251    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a50da472-64e4-4d6f-8ef5-cb86341ccc6e-lib-modules\") pod \"kindnet-9gcl6\" (UID: \"a50da472-64e4-4d6f-8ef5-cb86341ccc6e\") " pod="kube-system/kindnet-9gcl6"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.288303    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a50da472-64e4-4d6f-8ef5-cb86341ccc6e-cni-cfg\") pod \"kindnet-9gcl6\" (UID: \"a50da472-64e4-4d6f-8ef5-cb86341ccc6e\") " pod="kube-system/kindnet-9gcl6"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.288335    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a50da472-64e4-4d6f-8ef5-cb86341ccc6e-xtables-lock\") pod \"kindnet-9gcl6\" (UID: \"a50da472-64e4-4d6f-8ef5-cb86341ccc6e\") " pod="kube-system/kindnet-9gcl6"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.288357    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltb8c\" (UniqueName: \"kubernetes.io/projected/a50da472-64e4-4d6f-8ef5-cb86341ccc6e-kube-api-access-ltb8c\") pod \"kindnet-9gcl6\" (UID: \"a50da472-64e4-4d6f-8ef5-cb86341ccc6e\") " pod="kube-system/kindnet-9gcl6"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.389982    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a839c553-2909-425b-854a-e85f73ec466b-xtables-lock\") pod \"kube-proxy-mtj7d\" (UID: \"a839c553-2909-425b-854a-e85f73ec466b\") " pod="kube-system/kube-proxy-mtj7d"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.390437    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq844\" (UniqueName: \"kubernetes.io/projected/a839c553-2909-425b-854a-e85f73ec466b-kube-api-access-zq844\") pod \"kube-proxy-mtj7d\" (UID: \"a839c553-2909-425b-854a-e85f73ec466b\") " pod="kube-system/kube-proxy-mtj7d"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.392603    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a839c553-2909-425b-854a-e85f73ec466b-kube-proxy\") pod \"kube-proxy-mtj7d\" (UID: \"a839c553-2909-425b-854a-e85f73ec466b\") " pod="kube-system/kube-proxy-mtj7d"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.392635    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a839c553-2909-425b-854a-e85f73ec466b-lib-modules\") pod \"kube-proxy-mtj7d\" (UID: \"a839c553-2909-425b-854a-e85f73ec466b\") " pod="kube-system/kube-proxy-mtj7d"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.485391    2101 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 09:03:43 no-preload-052851 kubelet[2101]: I1123 09:03:43.753099    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mtj7d" podStartSLOduration=2.753074657 podStartE2EDuration="2.753074657s" podCreationTimestamp="2025-11-23 09:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:43.473714107 +0000 UTC m=+8.442762385" watchObservedRunningTime="2025-11-23 09:03:43.753074657 +0000 UTC m=+8.722122927"
	Nov 23 09:03:45 no-preload-052851 kubelet[2101]: I1123 09:03:45.865549    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9gcl6" podStartSLOduration=1.8602693000000001 podStartE2EDuration="4.865532147s" podCreationTimestamp="2025-11-23 09:03:41 +0000 UTC" firstStartedPulling="2025-11-23 09:03:41.913980689 +0000 UTC m=+6.883028967" lastFinishedPulling="2025-11-23 09:03:44.919243544 +0000 UTC m=+9.888291814" observedRunningTime="2025-11-23 09:03:45.47269693 +0000 UTC m=+10.441745216" watchObservedRunningTime="2025-11-23 09:03:45.865532147 +0000 UTC m=+10.834580417"
	Nov 23 09:03:55 no-preload-052851 kubelet[2101]: I1123 09:03:55.696297    2101 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:03:55 no-preload-052851 kubelet[2101]: I1123 09:03:55.930577    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srk47\" (UniqueName: \"kubernetes.io/projected/89dd4f3d-9f02-4d47-80e9-5a15ecf3073a-kube-api-access-srk47\") pod \"coredns-66bc5c9577-7gp9k\" (UID: \"89dd4f3d-9f02-4d47-80e9-5a15ecf3073a\") " pod="kube-system/coredns-66bc5c9577-7gp9k"
	Nov 23 09:03:55 no-preload-052851 kubelet[2101]: I1123 09:03:55.930830    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndgt9\" (UniqueName: \"kubernetes.io/projected/1f31af32-ade0-48e9-b1c6-c8db55be2faa-kube-api-access-ndgt9\") pod \"storage-provisioner\" (UID: \"1f31af32-ade0-48e9-b1c6-c8db55be2faa\") " pod="kube-system/storage-provisioner"
	Nov 23 09:03:55 no-preload-052851 kubelet[2101]: I1123 09:03:55.930960    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1f31af32-ade0-48e9-b1c6-c8db55be2faa-tmp\") pod \"storage-provisioner\" (UID: \"1f31af32-ade0-48e9-b1c6-c8db55be2faa\") " pod="kube-system/storage-provisioner"
	Nov 23 09:03:55 no-preload-052851 kubelet[2101]: I1123 09:03:55.931074    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89dd4f3d-9f02-4d47-80e9-5a15ecf3073a-config-volume\") pod \"coredns-66bc5c9577-7gp9k\" (UID: \"89dd4f3d-9f02-4d47-80e9-5a15ecf3073a\") " pod="kube-system/coredns-66bc5c9577-7gp9k"
	Nov 23 09:03:57 no-preload-052851 kubelet[2101]: I1123 09:03:57.572787    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7gp9k" podStartSLOduration=16.572768248 podStartE2EDuration="16.572768248s" podCreationTimestamp="2025-11-23 09:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:57.53829482 +0000 UTC m=+22.507343090" watchObservedRunningTime="2025-11-23 09:03:57.572768248 +0000 UTC m=+22.541816526"
	Nov 23 09:03:57 no-preload-052851 kubelet[2101]: I1123 09:03:57.597863    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.597847169 podStartE2EDuration="14.597847169s" podCreationTimestamp="2025-11-23 09:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:57.597682852 +0000 UTC m=+22.566731130" watchObservedRunningTime="2025-11-23 09:03:57.597847169 +0000 UTC m=+22.566895447"
	Nov 23 09:04:00 no-preload-052851 kubelet[2101]: I1123 09:04:00.370963    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kvwk\" (UniqueName: \"kubernetes.io/projected/0582e2dd-ee3f-4204-8ea2-0e7de31689f5-kube-api-access-7kvwk\") pod \"busybox\" (UID: \"0582e2dd-ee3f-4204-8ea2-0e7de31689f5\") " pod="default/busybox"
	
	
	==> storage-provisioner [67799c0024a713f10d401d68a9129c19a69e2f0f62f9812b12a2d10c9deab03f] <==
	I1123 09:03:56.604101       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:03:56.686199       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:03:56.686313       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:03:56.705193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:56.879635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:03:56.879987       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:03:56.880810       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-052851_20e7a2b5-e2fe-43b0-95d7-9fa66275f583!
	I1123 09:03:56.880605       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db9cb262-a470-4bae-817c-b27436922400", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-052851_20e7a2b5-e2fe-43b0-95d7-9fa66275f583 became leader
	W1123 09:03:56.903412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:56.911629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:03:56.985005       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-052851_20e7a2b5-e2fe-43b0-95d7-9fa66275f583!
	W1123 09:03:58.914636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:58.920226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:00.923745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:00.931687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:02.937494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:02.942711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:04.946314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:04.952291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:06.956793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:06.962536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:08.965523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:08.976143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:10.982203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:10.990613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-052851 -n no-preload-052851
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-052851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-052851
helpers_test.go:243: (dbg) docker inspect no-preload-052851:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8",
	        "Created": "2025-11-23T09:02:45.067625473Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 230823,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:02:45.223662085Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:572c983e466f1f784136812eef5cc59ac623db764bc7704d3676c4643993fd08",
	        "ResolvConfPath": "/var/lib/docker/containers/b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8/hostname",
	        "HostsPath": "/var/lib/docker/containers/b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8/hosts",
	        "LogPath": "/var/lib/docker/containers/b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8/b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8-json.log",
	        "Name": "/no-preload-052851",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-052851:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-052851",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b96168143a8d2d72e7d7ee1d73f03043b190a040ec0e00ee29c7175dff64d1f8",
	                "LowerDir": "/var/lib/docker/overlay2/fb27790462d65d381a9001812d219037b499bdb7891aa84bdd9348fb9b016f77-init/diff:/var/lib/docker/overlay2/e1de88c117c0c773e1fa636243190fd97eadaa5a8e1ee08fd53827cbac767d35/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fb27790462d65d381a9001812d219037b499bdb7891aa84bdd9348fb9b016f77/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fb27790462d65d381a9001812d219037b499bdb7891aa84bdd9348fb9b016f77/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fb27790462d65d381a9001812d219037b499bdb7891aa84bdd9348fb9b016f77/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-052851",
	                "Source": "/var/lib/docker/volumes/no-preload-052851/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-052851",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-052851",
	                "name.minikube.sigs.k8s.io": "no-preload-052851",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "61757e726949ec317d333580dceb584515396f09f97f879a2ecec88e966aaa41",
	            "SandboxKey": "/var/run/docker/netns/61757e726949",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-052851": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "da:77:ca:0e:a2:93",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "80b14e15ba547769052a662f0ea612743dcde4f1cb9583a7d03955199c63e88b",
	                    "EndpointID": "3da34e990daebf013ac957b849a450a8b8812c70dc1edbc48f8a070c2ba49e2b",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-052851",
	                        "b96168143a8d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-052851 -n no-preload-052851
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-052851 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-052851 logs -n 25: (1.970738861s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-672503 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p embed-certs-672503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:02 UTC │
	│ image   │ default-k8s-diff-port-118762 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ pause   │ -p default-k8s-diff-port-118762 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ unpause │ -p default-k8s-diff-port-118762 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p default-k8s-diff-port-118762                                                                                                                                                                                                                     │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p default-k8s-diff-port-118762                                                                                                                                                                                                                     │ default-k8s-diff-port-118762 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p disable-driver-mounts-209145                                                                                                                                                                                                                     │ disable-driver-mounts-209145 │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ start   │ -p no-preload-052851 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-052851            │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:03 UTC │
	│ image   │ embed-certs-672503 image list --format=json                                                                                                                                                                                                         │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ pause   │ -p embed-certs-672503 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ unpause │ -p embed-certs-672503 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p embed-certs-672503                                                                                                                                                                                                                               │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p embed-certs-672503                                                                                                                                                                                                                               │ embed-certs-672503           │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ start   │ -p newest-cni-948460 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:03 UTC │
	│ addons  │ enable metrics-server -p newest-cni-948460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:03 UTC │ 23 Nov 25 09:03 UTC │
	│ stop    │ -p newest-cni-948460 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:03 UTC │ 23 Nov 25 09:03 UTC │
	│ addons  │ enable dashboard -p newest-cni-948460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:03 UTC │ 23 Nov 25 09:03 UTC │
	│ start   │ -p newest-cni-948460 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:03 UTC │ 23 Nov 25 09:04 UTC │
	│ image   │ newest-cni-948460 image list --format=json                                                                                                                                                                                                          │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │ 23 Nov 25 09:04 UTC │
	│ pause   │ -p newest-cni-948460 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │ 23 Nov 25 09:04 UTC │
	│ unpause │ -p newest-cni-948460 --alsologtostderr -v=1                                                                                                                                                                                                         │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │ 23 Nov 25 09:04 UTC │
	│ delete  │ -p newest-cni-948460                                                                                                                                                                                                                                │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │ 23 Nov 25 09:04 UTC │
	│ delete  │ -p newest-cni-948460                                                                                                                                                                                                                                │ newest-cni-948460            │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │ 23 Nov 25 09:04 UTC │
	│ start   │ -p auto-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd                                                                                                                       │ auto-694698                  │ jenkins │ v1.37.0 │ 23 Nov 25 09:04 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:04:07
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:04:07.656917  241302 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:04:07.657129  241302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:04:07.657159  241302 out.go:374] Setting ErrFile to fd 2...
	I1123 09:04:07.657179  241302 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:04:07.657466  241302 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 09:04:07.657922  241302 out.go:368] Setting JSON to false
	I1123 09:04:07.658935  241302 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":6400,"bootTime":1763882248,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 09:04:07.659036  241302 start.go:143] virtualization:  
	I1123 09:04:07.663225  241302 out.go:179] * [auto-694698] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 09:04:07.666798  241302 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:04:07.666882  241302 notify.go:221] Checking for updates...
	I1123 09:04:07.673644  241302 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:04:07.676787  241302 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 09:04:07.679790  241302 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 09:04:07.682828  241302 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 09:04:07.685752  241302 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:04:07.689354  241302 config.go:182] Loaded profile config "no-preload-052851": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 09:04:07.689499  241302 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:04:07.716747  241302 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 09:04:07.716868  241302 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:04:07.785855  241302 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 09:04:07.776345247 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:04:07.785964  241302 docker.go:319] overlay module found
	I1123 09:04:07.789082  241302 out.go:179] * Using the docker driver based on user configuration
	I1123 09:04:07.792071  241302 start.go:309] selected driver: docker
	I1123 09:04:07.792099  241302 start.go:927] validating driver "docker" against <nil>
	I1123 09:04:07.792123  241302 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:04:07.792876  241302 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:04:07.852979  241302 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 09:04:07.84398739 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 09:04:07.853158  241302 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:04:07.853414  241302 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:04:07.856440  241302 out.go:179] * Using Docker driver with root privileges
	I1123 09:04:07.859337  241302 cni.go:84] Creating CNI manager for ""
	I1123 09:04:07.859411  241302 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 09:04:07.859426  241302 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:04:07.859505  241302 start.go:353] cluster config:
	{Name:auto-694698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-694698 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:con
tainerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:04:07.862614  241302 out.go:179] * Starting "auto-694698" primary control-plane node in "auto-694698" cluster
	I1123 09:04:07.865432  241302 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 09:04:07.868375  241302 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:04:07.871158  241302 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:04:07.871206  241302 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 09:04:07.871214  241302 cache.go:65] Caching tarball of preloaded images
	I1123 09:04:07.871285  241302 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:04:07.871295  241302 preload.go:238] Found /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1123 09:04:07.871484  241302 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 09:04:07.871650  241302 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/config.json ...
	I1123 09:04:07.871682  241302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/config.json: {Name:mk04e44bdc5e4436c6e1bf3178f0ce5d5347dd50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:04:07.890956  241302 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:04:07.890979  241302 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:04:07.891008  241302 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:04:07.891038  241302 start.go:360] acquireMachinesLock for auto-694698: {Name:mkd24b699cf00e4f5b60bd47a41ba901a356997a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:04:07.891163  241302 start.go:364] duration metric: took 104.837µs to acquireMachinesLock for "auto-694698"
	I1123 09:04:07.891195  241302 start.go:93] Provisioning new machine with config: &{Name:auto-694698 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-694698 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1123 09:04:07.891266  241302 start.go:125] createHost starting for "" (driver="docker")
	I1123 09:04:07.896563  241302 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:04:07.896812  241302 start.go:159] libmachine.API.Create for "auto-694698" (driver="docker")
	I1123 09:04:07.896858  241302 client.go:173] LocalClient.Create starting
	I1123 09:04:07.896931  241302 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/ca.pem
	I1123 09:04:07.896965  241302 main.go:143] libmachine: Decoding PEM data...
	I1123 09:04:07.896984  241302 main.go:143] libmachine: Parsing certificate...
	I1123 09:04:07.897044  241302 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-2811/.minikube/certs/cert.pem
	I1123 09:04:07.897071  241302 main.go:143] libmachine: Decoding PEM data...
	I1123 09:04:07.897086  241302 main.go:143] libmachine: Parsing certificate...
	I1123 09:04:07.897450  241302 cli_runner.go:164] Run: docker network inspect auto-694698 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:04:07.913986  241302 cli_runner.go:211] docker network inspect auto-694698 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:04:07.914081  241302 network_create.go:284] running [docker network inspect auto-694698] to gather additional debugging logs...
	I1123 09:04:07.914105  241302 cli_runner.go:164] Run: docker network inspect auto-694698
	W1123 09:04:07.930060  241302 cli_runner.go:211] docker network inspect auto-694698 returned with exit code 1
	I1123 09:04:07.930091  241302 network_create.go:287] error running [docker network inspect auto-694698]: docker network inspect auto-694698: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-694698 not found
	I1123 09:04:07.930113  241302 network_create.go:289] output of [docker network inspect auto-694698]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-694698 not found
	
	** /stderr **
	I1123 09:04:07.930201  241302 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:04:07.947282  241302 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ab12b2c3b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:c9:6d:7b:80:76} reservation:<nil>}
	I1123 09:04:07.947736  241302 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-7f5e4a52a57c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:b6:1a:79:b2:02:66} reservation:<nil>}
	I1123 09:04:07.948162  241302 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-ed031858d624 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:82:47:7d:04:56:4a} reservation:<nil>}
	I1123 09:04:07.948683  241302 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019fc7a0}
	I1123 09:04:07.948711  241302 network_create.go:124] attempt to create docker network auto-694698 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 09:04:07.948804  241302 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-694698 auto-694698
	I1123 09:04:08.012667  241302 network_create.go:108] docker network auto-694698 192.168.76.0/24 created
	I1123 09:04:08.012701  241302 kic.go:121] calculated static IP "192.168.76.2" for the "auto-694698" container
	I1123 09:04:08.012779  241302 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:04:08.030932  241302 cli_runner.go:164] Run: docker volume create auto-694698 --label name.minikube.sigs.k8s.io=auto-694698 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:04:08.050504  241302 oci.go:103] Successfully created a docker volume auto-694698
	I1123 09:04:08.050612  241302 cli_runner.go:164] Run: docker run --rm --name auto-694698-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-694698 --entrypoint /usr/bin/test -v auto-694698:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:04:08.621529  241302 oci.go:107] Successfully prepared a docker volume auto-694698
	I1123 09:04:08.621595  241302 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 09:04:08.621607  241302 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:04:08.621699  241302 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v auto-694698:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	251af50c56799       1611cd07b61d5       11 seconds ago      Running             busybox                   0                   fd6cd83190c6b       busybox                                     default
	728267be9c9dd       138784d87c9c5       17 seconds ago      Running             coredns                   0                   e4a45d9e79e33       coredns-66bc5c9577-7gp9k                    kube-system
	67799c0024a71       66749159455b3       17 seconds ago      Running             storage-provisioner       0                   98b414b9abda1       storage-provisioner                         kube-system
	9962d5768cb79       b1a8c6f707935       29 seconds ago      Running             kindnet-cni               0                   af8bf22dd4e5e       kindnet-9gcl6                               kube-system
	243fad4697ab1       05baa95f5142d       31 seconds ago      Running             kube-proxy                0                   791bab83d73ce       kube-proxy-mtj7d                            kube-system
	fa11ed059cda7       43911e833d64d       50 seconds ago      Running             kube-apiserver            0                   d3f39c0e1a0b5       kube-apiserver-no-preload-052851            kube-system
	8cd6a5c35e5fe       b5f57ec6b9867       50 seconds ago      Running             kube-scheduler            0                   700c0c58dbbe1       kube-scheduler-no-preload-052851            kube-system
	5b84ffb984802       7eb2c6ff0c5a7       50 seconds ago      Running             kube-controller-manager   0                   d8dac44bd0635       kube-controller-manager-no-preload-052851   kube-system
	b0d6788efdcdb       a1894772a478e       50 seconds ago      Running             etcd                      0                   af5ad45e26e8b       etcd-no-preload-052851                      kube-system
	
	
	==> containerd <==
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.411276797Z" level=info msg="CreateContainer within sandbox \"98b414b9abda1760acf3a59e7faf08acfddb5a5759cf9cde7018d20180a5cac6\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"67799c0024a713f10d401d68a9129c19a69e2f0f62f9812b12a2d10c9deab03f\""
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.415315083Z" level=info msg="StartContainer for \"67799c0024a713f10d401d68a9129c19a69e2f0f62f9812b12a2d10c9deab03f\""
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.416481493Z" level=info msg="connecting to shim 67799c0024a713f10d401d68a9129c19a69e2f0f62f9812b12a2d10c9deab03f" address="unix:///run/containerd/s/c5d1f0a3eaf09980d9f70facabc993b4c2e00a1cbf2d86210452a4df20beab1f" protocol=ttrpc version=3
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.444173654Z" level=info msg="Container 728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.468802274Z" level=info msg="CreateContainer within sandbox \"e4a45d9e79e33995cf1d4f454e329525d1df6dd1d69790f3242813edef439347\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784\""
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.482603193Z" level=info msg="StartContainer for \"728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784\""
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.485182529Z" level=info msg="connecting to shim 728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784" address="unix:///run/containerd/s/80eaff74f37e23eef2e57f55250a56d12aaa5bd6343f053f8474ce59a9d8b555" protocol=ttrpc version=3
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.587683504Z" level=info msg="StartContainer for \"67799c0024a713f10d401d68a9129c19a69e2f0f62f9812b12a2d10c9deab03f\" returns successfully"
	Nov 23 09:03:56 no-preload-052851 containerd[759]: time="2025-11-23T09:03:56.691745714Z" level=info msg="StartContainer for \"728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784\" returns successfully"
	Nov 23 09:04:00 no-preload-052851 containerd[759]: time="2025-11-23T09:04:00.600372578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0582e2dd-ee3f-4204-8ea2-0e7de31689f5,Namespace:default,Attempt:0,}"
	Nov 23 09:04:00 no-preload-052851 containerd[759]: time="2025-11-23T09:04:00.675308630Z" level=info msg="connecting to shim fd6cd83190c6b94db4dea88e625d9aae2405585b7e6f9d83d8b7fbd0695962b7" address="unix:///run/containerd/s/d522c789b6a22b66268703fe51cb03379ef661a01f3a01edda6c2933a9396a3c" namespace=k8s.io protocol=ttrpc version=3
	Nov 23 09:04:00 no-preload-052851 containerd[759]: time="2025-11-23T09:04:00.797212058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:0582e2dd-ee3f-4204-8ea2-0e7de31689f5,Namespace:default,Attempt:0,} returns sandbox id \"fd6cd83190c6b94db4dea88e625d9aae2405585b7e6f9d83d8b7fbd0695962b7\""
	Nov 23 09:04:00 no-preload-052851 containerd[759]: time="2025-11-23T09:04:00.806611349Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.906514799Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.909872948Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937184"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.912469081Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.916714737Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.918162453Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.111504442s"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.918479814Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.930212277Z" level=info msg="CreateContainer within sandbox \"fd6cd83190c6b94db4dea88e625d9aae2405585b7e6f9d83d8b7fbd0695962b7\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.956003213Z" level=info msg="Container 251af50c56799510e172c14d607cb61ca2409100de70586ed502ff18b9ce96dc: CDI devices from CRI Config.CDIDevices: []"
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.966118720Z" level=info msg="CreateContainer within sandbox \"fd6cd83190c6b94db4dea88e625d9aae2405585b7e6f9d83d8b7fbd0695962b7\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"251af50c56799510e172c14d607cb61ca2409100de70586ed502ff18b9ce96dc\""
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.968536537Z" level=info msg="StartContainer for \"251af50c56799510e172c14d607cb61ca2409100de70586ed502ff18b9ce96dc\""
	Nov 23 09:04:02 no-preload-052851 containerd[759]: time="2025-11-23T09:04:02.969582437Z" level=info msg="connecting to shim 251af50c56799510e172c14d607cb61ca2409100de70586ed502ff18b9ce96dc" address="unix:///run/containerd/s/d522c789b6a22b66268703fe51cb03379ef661a01f3a01edda6c2933a9396a3c" protocol=ttrpc version=3
	Nov 23 09:04:03 no-preload-052851 containerd[759]: time="2025-11-23T09:04:03.084173157Z" level=info msg="StartContainer for \"251af50c56799510e172c14d607cb61ca2409100de70586ed502ff18b9ce96dc\" returns successfully"
	
	
	==> coredns [728267be9c9ddc3d21ba69e767b80bbd2c81472c6106e3792ee3cec88f2b4784] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33279 - 12748 "HINFO IN 8031407418468007153.8116781740704995630. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.053909766s
	
	
	==> describe nodes <==
	Name:               no-preload-052851
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-052851
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=no-preload-052851
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_03_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:03:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-052851
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:04:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:04:05 +0000   Sun, 23 Nov 2025 09:03:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:04:05 +0000   Sun, 23 Nov 2025 09:03:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:04:05 +0000   Sun, 23 Nov 2025 09:03:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:04:05 +0000   Sun, 23 Nov 2025 09:03:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-052851
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 7283ea1857f18f20a875c29069214c9d
	  System UUID:                a506858d-1181-4822-a17a-b714d78afbb5
	  Boot ID:                    86d8501c-1df5-4d7e-90cb-d9ad951202c5
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-7gp9k                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     33s
	  kube-system                 etcd-no-preload-052851                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         40s
	  kube-system                 kindnet-9gcl6                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      33s
	  kube-system                 kube-apiserver-no-preload-052851             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-no-preload-052851    200m (10%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-proxy-mtj7d                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-scheduler-no-preload-052851             100m (5%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 31s                kube-proxy       
	  Normal   NodeAllocatableEnforced  52s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 52s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node no-preload-052851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node no-preload-052851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     52s (x7 over 52s)  kubelet          Node no-preload-052851 status is now: NodeHasSufficientPID
	  Normal   Starting                 52s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 39s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 39s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  39s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  39s                kubelet          Node no-preload-052851 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    39s                kubelet          Node no-preload-052851 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     39s                kubelet          Node no-preload-052851 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           34s                node-controller  Node no-preload-052851 event: Registered Node no-preload-052851 in Controller
	  Normal   NodeReady                19s                kubelet          Node no-preload-052851 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov23 07:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.014670] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.505841] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.033008] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.738583] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +7.057424] kauditd_printk_skb: 36 callbacks suppressed
	[Nov23 08:10] kauditd_printk_skb: 8 callbacks suppressed
	[Nov23 08:26] hrtimer: interrupt took 58442338 ns
	
	
	==> etcd [b0d6788efdcdb06f98fafe1108b059951e13e8001f59a53429489d87bdba3fff] <==
	{"level":"warn","ts":"2025-11-23T09:03:28.850076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:28.923988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.003254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.061874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.124027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.147230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.190502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.216372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.245769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.279757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.312516Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.335654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.366286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.440296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.474540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.507259Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.529181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.572650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.621408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.710601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.749226Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.781632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.827980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:29.858137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:03:30.003194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50768","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:04:14 up  1:46,  0 user,  load average: 6.05, 4.52, 3.44
	Linux no-preload-052851 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9962d5768cb79509a2fa79c071471452fc95d1b60514007b6dce165a9353cbda] <==
	I1123 09:03:45.433548       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:03:45.433821       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:03:45.433975       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:03:45.433988       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:03:45.434003       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:03:45Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:03:45.723760       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:03:45.724215       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:03:45.724337       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:03:45.724644       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:03:45.924816       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:03:45.924843       1 metrics.go:72] Registering metrics
	I1123 09:03:45.924892       1 controller.go:711] "Syncing nftables rules"
	I1123 09:03:55.643438       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:03:55.643492       1 main.go:301] handling current node
	I1123 09:04:05.639509       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:04:05.639553       1 main.go:301] handling current node
	
	
	==> kube-apiserver [fa11ed059cda74006b872347a93936aa47ce76fa8d40cee0d71eafc335fb60da] <==
	I1123 09:03:32.040493       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:03:32.045359       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 09:03:32.045401       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 09:03:32.100271       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:03:32.105336       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:03:32.141292       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:03:32.149648       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:03:32.420717       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:03:32.432059       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:03:32.432294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:03:34.010629       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:03:34.172193       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:03:34.411656       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:03:34.431897       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 09:03:34.436375       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:03:34.446287       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:03:35.091544       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:03:35.204581       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:03:35.253102       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:03:35.277542       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:03:40.925398       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:03:40.944550       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:03:40.978651       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:03:41.221500       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1123 09:04:10.503042       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:33096: use of closed network connection
	
	
	==> kube-controller-manager [5b84ffb9848024293ad234427b40d5770e927e950b0168f0038cfc367da02878] <==
	I1123 09:03:40.185569       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-052851" podCIDRs=["10.244.0.0/24"]
	I1123 09:03:40.196935       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:03:40.208337       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:03:40.218379       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:03:40.219582       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:03:40.219599       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:03:40.225359       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:03:40.225943       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:03:40.226102       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:03:40.226253       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:03:40.229002       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:03:40.219611       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:03:40.234028       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:03:40.235292       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:03:40.240594       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:03:40.240813       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:03:40.240990       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-052851"
	I1123 09:03:40.241071       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 09:03:40.249857       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:03:40.256263       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:03:40.289729       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:03:40.289759       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:03:40.289767       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:03:40.290409       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:04:00.250892       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [243fad4697ab1802b6ec6733d5e2bf553ca67c522eacf4f66eea2479678b37d9] <==
	I1123 09:03:42.507862       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:03:42.611420       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:03:42.711934       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:03:42.711966       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:03:42.712045       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:03:43.034917       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:03:43.034994       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:03:43.061282       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:03:43.061698       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:03:43.061715       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:03:43.068729       1 config.go:200] "Starting service config controller"
	I1123 09:03:43.068742       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:03:43.068757       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:03:43.068762       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:03:43.068772       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:03:43.068775       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:03:43.077307       1 config.go:309] "Starting node config controller"
	I1123 09:03:43.077369       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:03:43.077395       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:03:43.170117       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:03:43.170154       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:03:43.170200       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8cd6a5c35e5fedbc0c47463b2e54e04aa7dc240745f63616e6000ce5994b1a80] <==
	I1123 09:03:26.899965       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:03:33.597693       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:03:33.599646       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:03:33.599788       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:03:33.599884       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:03:33.632647       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:03:33.632874       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:03:33.647227       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:03:33.649532       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:03:33.648529       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:03:33.648500       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	E1123 09:03:33.678867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I1123 09:03:34.650353       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:03:36 no-preload-052851 kubelet[2101]: I1123 09:03:36.410618    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-052851" podStartSLOduration=1.410603022 podStartE2EDuration="1.410603022s" podCreationTimestamp="2025-11-23 09:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:36.410373211 +0000 UTC m=+1.379421480" watchObservedRunningTime="2025-11-23 09:03:36.410603022 +0000 UTC m=+1.379651291"
	Nov 23 09:03:36 no-preload-052851 kubelet[2101]: I1123 09:03:36.440925    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-052851" podStartSLOduration=2.440903916 podStartE2EDuration="2.440903916s" podCreationTimestamp="2025-11-23 09:03:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:36.424241493 +0000 UTC m=+1.393289779" watchObservedRunningTime="2025-11-23 09:03:36.440903916 +0000 UTC m=+1.409952186"
	Nov 23 09:03:36 no-preload-052851 kubelet[2101]: I1123 09:03:36.441053    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-052851" podStartSLOduration=1.441047613 podStartE2EDuration="1.441047613s" podCreationTimestamp="2025-11-23 09:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:36.440992548 +0000 UTC m=+1.410040826" watchObservedRunningTime="2025-11-23 09:03:36.441047613 +0000 UTC m=+1.410095916"
	Nov 23 09:03:36 no-preload-052851 kubelet[2101]: I1123 09:03:36.453526    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-052851" podStartSLOduration=1.453487076 podStartE2EDuration="1.453487076s" podCreationTimestamp="2025-11-23 09:03:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:36.452855678 +0000 UTC m=+1.421903956" watchObservedRunningTime="2025-11-23 09:03:36.453487076 +0000 UTC m=+1.422535370"
	Nov 23 09:03:40 no-preload-052851 kubelet[2101]: I1123 09:03:40.215139    2101 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:03:40 no-preload-052851 kubelet[2101]: I1123 09:03:40.217396    2101 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.288251    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a50da472-64e4-4d6f-8ef5-cb86341ccc6e-lib-modules\") pod \"kindnet-9gcl6\" (UID: \"a50da472-64e4-4d6f-8ef5-cb86341ccc6e\") " pod="kube-system/kindnet-9gcl6"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.288303    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a50da472-64e4-4d6f-8ef5-cb86341ccc6e-cni-cfg\") pod \"kindnet-9gcl6\" (UID: \"a50da472-64e4-4d6f-8ef5-cb86341ccc6e\") " pod="kube-system/kindnet-9gcl6"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.288335    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a50da472-64e4-4d6f-8ef5-cb86341ccc6e-xtables-lock\") pod \"kindnet-9gcl6\" (UID: \"a50da472-64e4-4d6f-8ef5-cb86341ccc6e\") " pod="kube-system/kindnet-9gcl6"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.288357    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltb8c\" (UniqueName: \"kubernetes.io/projected/a50da472-64e4-4d6f-8ef5-cb86341ccc6e-kube-api-access-ltb8c\") pod \"kindnet-9gcl6\" (UID: \"a50da472-64e4-4d6f-8ef5-cb86341ccc6e\") " pod="kube-system/kindnet-9gcl6"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.389982    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a839c553-2909-425b-854a-e85f73ec466b-xtables-lock\") pod \"kube-proxy-mtj7d\" (UID: \"a839c553-2909-425b-854a-e85f73ec466b\") " pod="kube-system/kube-proxy-mtj7d"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.390437    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq844\" (UniqueName: \"kubernetes.io/projected/a839c553-2909-425b-854a-e85f73ec466b-kube-api-access-zq844\") pod \"kube-proxy-mtj7d\" (UID: \"a839c553-2909-425b-854a-e85f73ec466b\") " pod="kube-system/kube-proxy-mtj7d"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.392603    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a839c553-2909-425b-854a-e85f73ec466b-kube-proxy\") pod \"kube-proxy-mtj7d\" (UID: \"a839c553-2909-425b-854a-e85f73ec466b\") " pod="kube-system/kube-proxy-mtj7d"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.392635    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a839c553-2909-425b-854a-e85f73ec466b-lib-modules\") pod \"kube-proxy-mtj7d\" (UID: \"a839c553-2909-425b-854a-e85f73ec466b\") " pod="kube-system/kube-proxy-mtj7d"
	Nov 23 09:03:41 no-preload-052851 kubelet[2101]: I1123 09:03:41.485391    2101 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 23 09:03:43 no-preload-052851 kubelet[2101]: I1123 09:03:43.753099    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mtj7d" podStartSLOduration=2.753074657 podStartE2EDuration="2.753074657s" podCreationTimestamp="2025-11-23 09:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:43.473714107 +0000 UTC m=+8.442762385" watchObservedRunningTime="2025-11-23 09:03:43.753074657 +0000 UTC m=+8.722122927"
	Nov 23 09:03:45 no-preload-052851 kubelet[2101]: I1123 09:03:45.865549    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9gcl6" podStartSLOduration=1.8602693000000001 podStartE2EDuration="4.865532147s" podCreationTimestamp="2025-11-23 09:03:41 +0000 UTC" firstStartedPulling="2025-11-23 09:03:41.913980689 +0000 UTC m=+6.883028967" lastFinishedPulling="2025-11-23 09:03:44.919243544 +0000 UTC m=+9.888291814" observedRunningTime="2025-11-23 09:03:45.47269693 +0000 UTC m=+10.441745216" watchObservedRunningTime="2025-11-23 09:03:45.865532147 +0000 UTC m=+10.834580417"
	Nov 23 09:03:55 no-preload-052851 kubelet[2101]: I1123 09:03:55.696297    2101 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:03:55 no-preload-052851 kubelet[2101]: I1123 09:03:55.930577    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srk47\" (UniqueName: \"kubernetes.io/projected/89dd4f3d-9f02-4d47-80e9-5a15ecf3073a-kube-api-access-srk47\") pod \"coredns-66bc5c9577-7gp9k\" (UID: \"89dd4f3d-9f02-4d47-80e9-5a15ecf3073a\") " pod="kube-system/coredns-66bc5c9577-7gp9k"
	Nov 23 09:03:55 no-preload-052851 kubelet[2101]: I1123 09:03:55.930830    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndgt9\" (UniqueName: \"kubernetes.io/projected/1f31af32-ade0-48e9-b1c6-c8db55be2faa-kube-api-access-ndgt9\") pod \"storage-provisioner\" (UID: \"1f31af32-ade0-48e9-b1c6-c8db55be2faa\") " pod="kube-system/storage-provisioner"
	Nov 23 09:03:55 no-preload-052851 kubelet[2101]: I1123 09:03:55.930960    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/1f31af32-ade0-48e9-b1c6-c8db55be2faa-tmp\") pod \"storage-provisioner\" (UID: \"1f31af32-ade0-48e9-b1c6-c8db55be2faa\") " pod="kube-system/storage-provisioner"
	Nov 23 09:03:55 no-preload-052851 kubelet[2101]: I1123 09:03:55.931074    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/89dd4f3d-9f02-4d47-80e9-5a15ecf3073a-config-volume\") pod \"coredns-66bc5c9577-7gp9k\" (UID: \"89dd4f3d-9f02-4d47-80e9-5a15ecf3073a\") " pod="kube-system/coredns-66bc5c9577-7gp9k"
	Nov 23 09:03:57 no-preload-052851 kubelet[2101]: I1123 09:03:57.572787    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7gp9k" podStartSLOduration=16.572768248 podStartE2EDuration="16.572768248s" podCreationTimestamp="2025-11-23 09:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:57.53829482 +0000 UTC m=+22.507343090" watchObservedRunningTime="2025-11-23 09:03:57.572768248 +0000 UTC m=+22.541816526"
	Nov 23 09:03:57 no-preload-052851 kubelet[2101]: I1123 09:03:57.597863    2101 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.597847169 podStartE2EDuration="14.597847169s" podCreationTimestamp="2025-11-23 09:03:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:03:57.597682852 +0000 UTC m=+22.566731130" watchObservedRunningTime="2025-11-23 09:03:57.597847169 +0000 UTC m=+22.566895447"
	Nov 23 09:04:00 no-preload-052851 kubelet[2101]: I1123 09:04:00.370963    2101 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kvwk\" (UniqueName: \"kubernetes.io/projected/0582e2dd-ee3f-4204-8ea2-0e7de31689f5-kube-api-access-7kvwk\") pod \"busybox\" (UID: \"0582e2dd-ee3f-4204-8ea2-0e7de31689f5\") " pod="default/busybox"
	
	
	==> storage-provisioner [67799c0024a713f10d401d68a9129c19a69e2f0f62f9812b12a2d10c9deab03f] <==
	W1123 09:03:56.879635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:03:56.879987       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:03:56.880810       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-052851_20e7a2b5-e2fe-43b0-95d7-9fa66275f583!
	I1123 09:03:56.880605       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"db9cb262-a470-4bae-817c-b27436922400", APIVersion:"v1", ResourceVersion:"450", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-052851_20e7a2b5-e2fe-43b0-95d7-9fa66275f583 became leader
	W1123 09:03:56.903412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:56.911629       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:03:56.985005       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-052851_20e7a2b5-e2fe-43b0-95d7-9fa66275f583!
	W1123 09:03:58.914636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:03:58.920226       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:00.923745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:00.931687       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:02.937494       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:02.942711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:04.946314       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:04.952291       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:06.956793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:06.962536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:08.965523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:08.976143       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:10.982203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:10.990613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:12.994232       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:13.003747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:15.019425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:04:15.049981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-052851 -n no-preload-052851
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-052851 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (15.94s)

                                                
                                    

Test pass (299/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 10.09
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.11
9 TestDownloadOnly/v1.28.0/DeleteAll 0.24
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.2
12 TestDownloadOnly/v1.34.1/json-events 5.4
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.6
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 172.27
29 TestAddons/serial/Volcano 39.73
31 TestAddons/serial/GCPAuth/Namespaces 0.18
32 TestAddons/serial/GCPAuth/FakeCredentials 9.96
35 TestAddons/parallel/Registry 15.19
36 TestAddons/parallel/RegistryCreds 0.75
37 TestAddons/parallel/Ingress 20.14
38 TestAddons/parallel/InspektorGadget 11.74
39 TestAddons/parallel/MetricsServer 5.79
41 TestAddons/parallel/CSI 46.57
42 TestAddons/parallel/Headlamp 11.35
43 TestAddons/parallel/CloudSpanner 5.6
44 TestAddons/parallel/LocalPath 51.46
45 TestAddons/parallel/NvidiaDevicePlugin 6.55
46 TestAddons/parallel/Yakd 11.86
48 TestAddons/StoppedEnableDisable 12.37
49 TestCertOptions 39.11
50 TestCertExpiration 232.03
52 TestForceSystemdFlag 49.49
53 TestForceSystemdEnv 44.65
54 TestDockerEnvContainerd 49.91
58 TestErrorSpam/setup 32.41
59 TestErrorSpam/start 0.81
60 TestErrorSpam/status 1.09
61 TestErrorSpam/pause 1.78
62 TestErrorSpam/unpause 1.84
63 TestErrorSpam/stop 1.6
66 TestFunctional/serial/CopySyncFile 0.01
67 TestFunctional/serial/StartWithProxy 81.6
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 7.27
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.43
75 TestFunctional/serial/CacheCmd/cache/add_local 1.24
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.9
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
83 TestFunctional/serial/ExtraConfig 43.08
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.44
86 TestFunctional/serial/LogsFileCmd 1.42
87 TestFunctional/serial/InvalidService 4.78
89 TestFunctional/parallel/ConfigCmd 0.48
90 TestFunctional/parallel/DashboardCmd 7.35
91 TestFunctional/parallel/DryRun 0.5
92 TestFunctional/parallel/InternationalLanguage 0.23
93 TestFunctional/parallel/StatusCmd 1.42
97 TestFunctional/parallel/ServiceCmdConnect 8.65
98 TestFunctional/parallel/AddonsCmd 0.17
99 TestFunctional/parallel/PersistentVolumeClaim 24.32
101 TestFunctional/parallel/SSHCmd 0.93
102 TestFunctional/parallel/CpCmd 2.35
104 TestFunctional/parallel/FileSync 0.37
105 TestFunctional/parallel/CertSync 2.2
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.74
113 TestFunctional/parallel/License 0.36
114 TestFunctional/parallel/Version/short 0.09
115 TestFunctional/parallel/Version/components 1.42
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.66
118 TestFunctional/parallel/ImageCommands/ImageListShort 0.42
119 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
120 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
121 TestFunctional/parallel/ImageCommands/ImageListYaml 0.33
122 TestFunctional/parallel/ImageCommands/ImageBuild 4.28
123 TestFunctional/parallel/ImageCommands/Setup 0.7
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.53
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.46
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.24
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.31
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.33
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.67
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
134 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.08
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
140 TestFunctional/parallel/ServiceCmd/DeployApp 7.21
141 TestFunctional/parallel/ServiceCmd/List 0.53
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.39
144 TestFunctional/parallel/ServiceCmd/Format 0.4
145 TestFunctional/parallel/ServiceCmd/URL 0.43
146 TestFunctional/parallel/UpdateContextCmd/no_changes 0.17
147 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
148 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
149 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
150 TestFunctional/parallel/ProfileCmd/profile_list 0.45
151 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
152 TestFunctional/parallel/MountCmd/any-port 8.73
153 TestFunctional/parallel/MountCmd/specific-port 2.37
154 TestFunctional/parallel/MountCmd/VerifyCleanup 1.79
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.03
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 167.61
163 TestMultiControlPlane/serial/DeployApp 7.37
164 TestMultiControlPlane/serial/PingHostFromPods 1.59
165 TestMultiControlPlane/serial/AddWorkerNode 32.4
166 TestMultiControlPlane/serial/NodeLabels 0.12
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
168 TestMultiControlPlane/serial/CopyFile 21.3
169 TestMultiControlPlane/serial/StopSecondaryNode 13.01
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.85
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.86
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.28
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 99.45
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.43
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.8
176 TestMultiControlPlane/serial/StopCluster 36.32
177 TestMultiControlPlane/serial/RestartCluster 67.21
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.83
179 TestMultiControlPlane/serial/AddSecondaryNode 86.35
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.11
185 TestJSONOutput/start/Command 81.41
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.74
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.62
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.13
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 44.29
211 TestKicCustomNetwork/use_default_bridge_network 35.87
212 TestKicExistingNetwork 35.69
213 TestKicCustomSubnet 36.95
214 TestKicStaticIP 36.25
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 68.49
219 TestMountStart/serial/StartWithMountFirst 8.41
220 TestMountStart/serial/VerifyMountFirst 0.28
221 TestMountStart/serial/StartWithMountSecond 8.17
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.28
226 TestMountStart/serial/RestartStopped 7.48
227 TestMountStart/serial/VerifyMountPostStop 0.29
230 TestMultiNode/serial/FreshStart2Nodes 134.64
231 TestMultiNode/serial/DeployApp2Nodes 6.24
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 27.08
234 TestMultiNode/serial/MultiNodeLabels 0.1
235 TestMultiNode/serial/ProfileList 0.71
236 TestMultiNode/serial/CopyFile 10.9
237 TestMultiNode/serial/StopNode 2.44
238 TestMultiNode/serial/StartAfterStop 7.93
239 TestMultiNode/serial/RestartKeepsNodes 77.69
240 TestMultiNode/serial/DeleteNode 5.68
241 TestMultiNode/serial/StopMultiNode 24.09
242 TestMultiNode/serial/RestartMultiNode 49.71
243 TestMultiNode/serial/ValidateNameConflict 37.27
248 TestPreload 126.49
250 TestScheduledStopUnix 110.42
253 TestInsufficientStorage 12.71
254 TestRunningBinaryUpgrade 63.94
256 TestKubernetesUpgrade 354.32
257 TestMissingContainerUpgrade 142.32
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 41.98
261 TestNoKubernetes/serial/StartWithStopK8s 24.55
262 TestNoKubernetes/serial/Start 7.22
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
265 TestNoKubernetes/serial/ProfileList 0.7
266 TestNoKubernetes/serial/Stop 1.3
267 TestNoKubernetes/serial/StartNoArgs 6.83
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.38
269 TestStoppedBinaryUpgrade/Setup 0.82
270 TestStoppedBinaryUpgrade/Upgrade 52.99
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.41
280 TestPause/serial/Start 83.36
281 TestPause/serial/SecondStartNoReconfiguration 7.05
282 TestPause/serial/Pause 0.74
283 TestPause/serial/VerifyStatus 0.34
284 TestPause/serial/Unpause 0.77
285 TestPause/serial/PauseAgain 1.06
286 TestPause/serial/DeletePaused 2.95
287 TestPause/serial/VerifyDeletedResources 0.41
295 TestNetworkPlugins/group/false 5.38
300 TestStartStop/group/old-k8s-version/serial/FirstStart 60.07
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.23
303 TestStartStop/group/old-k8s-version/serial/Stop 12.19
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
305 TestStartStop/group/old-k8s-version/serial/SecondStart 48.15
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.09
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
309 TestStartStop/group/old-k8s-version/serial/Pause 3.11
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 87.68
313 TestStartStop/group/embed-certs/serial/FirstStart 89.77
316 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.12
317 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.16
318 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.05
319 TestStartStop/group/embed-certs/serial/Stop 12.17
320 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.18
321 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 55.61
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.34
323 TestStartStop/group/embed-certs/serial/SecondStart 55.11
324 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.1
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
327 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
328 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.26
329 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.21
331 TestStartStop/group/no-preload/serial/FirstStart 76.27
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.57
333 TestStartStop/group/embed-certs/serial/Pause 3.35
335 TestStartStop/group/newest-cni/serial/FirstStart 47.44
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.6
338 TestStartStop/group/newest-cni/serial/Stop 1.61
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.29
340 TestStartStop/group/newest-cni/serial/SecondStart 17.43
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
345 TestStartStop/group/newest-cni/serial/Pause 3.01
346 TestNetworkPlugins/group/auto/Start 88.56
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.41
348 TestStartStop/group/no-preload/serial/Stop 12.16
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.22
350 TestStartStop/group/no-preload/serial/SecondStart 57.35
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.14
353 TestNetworkPlugins/group/auto/KubeletFlags 0.33
354 TestNetworkPlugins/group/auto/NetCatPod 11.32
355 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.33
356 TestStartStop/group/no-preload/serial/Pause 3.56
357 TestNetworkPlugins/group/kindnet/Start 88.4
358 TestNetworkPlugins/group/auto/DNS 0.23
359 TestNetworkPlugins/group/auto/Localhost 0.18
360 TestNetworkPlugins/group/auto/HairPin 0.18
361 TestNetworkPlugins/group/calico/Start 64.33
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/calico/ControllerPod 6.01
364 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
365 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
366 TestNetworkPlugins/group/calico/KubeletFlags 0.38
367 TestNetworkPlugins/group/calico/NetCatPod 9.3
368 TestNetworkPlugins/group/kindnet/DNS 0.21
369 TestNetworkPlugins/group/kindnet/Localhost 0.16
370 TestNetworkPlugins/group/kindnet/HairPin 0.17
371 TestNetworkPlugins/group/calico/DNS 0.19
372 TestNetworkPlugins/group/calico/Localhost 0.16
373 TestNetworkPlugins/group/calico/HairPin 0.17
374 TestNetworkPlugins/group/custom-flannel/Start 61.25
375 TestNetworkPlugins/group/enable-default-cni/Start 82.51
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.29
378 TestNetworkPlugins/group/custom-flannel/DNS 0.17
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.16
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.45
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.34
383 TestNetworkPlugins/group/flannel/Start 65.99
384 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
385 TestNetworkPlugins/group/enable-default-cni/Localhost 0.19
386 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
387 TestNetworkPlugins/group/bridge/Start 74.68
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
390 TestNetworkPlugins/group/flannel/NetCatPod 9.48
391 TestNetworkPlugins/group/flannel/DNS 0.17
392 TestNetworkPlugins/group/flannel/Localhost 0.16
393 TestNetworkPlugins/group/flannel/HairPin 0.17
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.44
395 TestNetworkPlugins/group/bridge/NetCatPod 8.33
396 TestNetworkPlugins/group/bridge/DNS 0.21
397 TestNetworkPlugins/group/bridge/Localhost 0.14
398 TestNetworkPlugins/group/bridge/HairPin 0.14
x
+
TestDownloadOnly/v1.28.0/json-events (10.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-295249 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-295249 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (10.088589483s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (10.09s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 08:10:58.818343    4624 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1123 08:10:58.818420    4624 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.11s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-295249
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-295249: exit status 85 (106.151068ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-295249 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-295249 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:10:48
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:10:48.772126    4630 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:10:48.772324    4630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:48.772349    4630 out.go:374] Setting ErrFile to fd 2...
	I1123 08:10:48.772371    4630 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:48.772649    4630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	W1123 08:10:48.772824    4630 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21969-2811/.minikube/config/config.json: open /home/jenkins/minikube-integration/21969-2811/.minikube/config/config.json: no such file or directory
	I1123 08:10:48.773252    4630 out.go:368] Setting JSON to true
	I1123 08:10:48.774051    4630 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3201,"bootTime":1763882248,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:10:48.774141    4630 start.go:143] virtualization:  
	I1123 08:10:48.779929    4630 out.go:99] [download-only-295249] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1123 08:10:48.780142    4630 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 08:10:48.780248    4630 notify.go:221] Checking for updates...
	I1123 08:10:48.784922    4630 out.go:171] MINIKUBE_LOCATION=21969
	I1123 08:10:48.788361    4630 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:10:48.791596    4630 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:10:48.794714    4630 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:10:48.797863    4630 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 08:10:48.804119    4630 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 08:10:48.804428    4630 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:10:48.828935    4630 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:10:48.829029    4630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:10:49.236940    4630 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 08:10:49.227757529 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:10:49.237053    4630 docker.go:319] overlay module found
	I1123 08:10:49.240115    4630 out.go:99] Using the docker driver based on user configuration
	I1123 08:10:49.240151    4630 start.go:309] selected driver: docker
	I1123 08:10:49.240159    4630 start.go:927] validating driver "docker" against <nil>
	I1123 08:10:49.240261    4630 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:10:49.305179    4630 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-23 08:10:49.296006069 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:10:49.305329    4630 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:10:49.305621    4630 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 08:10:49.305782    4630 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:10:49.308893    4630 out.go:171] Using Docker driver with root privileges
	I1123 08:10:49.311708    4630 cni.go:84] Creating CNI manager for ""
	I1123 08:10:49.311770    4630 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:10:49.311784    4630 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:10:49.311861    4630 start.go:353] cluster config:
	{Name:download-only-295249 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-295249 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:10:49.314860    4630 out.go:99] Starting "download-only-295249" primary control-plane node in "download-only-295249" cluster
	I1123 08:10:49.314874    4630 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:10:49.317715    4630 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:10:49.317745    4630 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:10:49.317899    4630 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:10:49.333394    4630 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:10:49.333572    4630 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:10:49.333666    4630 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:10:49.378551    4630 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 08:10:49.378574    4630 cache.go:65] Caching tarball of preloaded images
	I1123 08:10:49.378748    4630 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:10:49.382085    4630 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 08:10:49.382112    4630 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1123 08:10:49.475877    4630 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1123 08:10:49.476004    4630 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1123 08:10:53.091694    4630 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1123 08:10:53.092069    4630 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/download-only-295249/config.json ...
	I1123 08:10:53.092103    4630 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/download-only-295249/config.json: {Name:mkcd4218bfca300bee3a7a00e6c5406a2e3e77ef Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:10:53.092283    4630 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1123 08:10:53.092469    4630 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21969-2811/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-295249 host does not exist
	  To start a cluster, run: "minikube start -p download-only-295249"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.11s)
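
Note: the preload download recorded above can be reproduced by hand if needed; a minimal sketch using the URL and md5 checksum reported in the log (the curl/md5sum invocation itself is illustrative, not part of the test):

	# fetch the v1.28.0 containerd preload and verify it against the checksum returned by the GCS API
	curl -fLo preload.tar.lz4 "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
	echo "38d7f581f2fa4226c8af2c9106b982b7  preload.tar.lz4" | md5sum -c -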

TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.24s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-295249
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnly/v1.34.1/json-events (5.4s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-738140 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-738140 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (5.401693287s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (5.40s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 08:11:04.764799    4624 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1123 08:11:04.764837    4624 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-738140
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-738140: exit status 85 (87.598456ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-295249 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-295249 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:10 UTC │
	│ delete  │ -p download-only-295249                                                                                                                                                               │ download-only-295249 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │ 23 Nov 25 08:10 UTC │
	│ start   │ -o=json --download-only -p download-only-738140 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-738140 │ jenkins │ v1.37.0 │ 23 Nov 25 08:10 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:10:59
	Running on machine: ip-172-31-24-2
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:10:59.412367    4823 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:10:59.412545    4823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:59.412574    4823 out.go:374] Setting ErrFile to fd 2...
	I1123 08:10:59.412597    4823 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:10:59.412906    4823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:10:59.413346    4823 out.go:368] Setting JSON to true
	I1123 08:10:59.414058    4823 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3212,"bootTime":1763882248,"procs":144,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:10:59.414145    4823 start.go:143] virtualization:  
	I1123 08:10:59.434981    4823 out.go:99] [download-only-738140] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:10:59.435241    4823 notify.go:221] Checking for updates...
	I1123 08:10:59.455866    4823 out.go:171] MINIKUBE_LOCATION=21969
	I1123 08:10:59.470387    4823 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:10:59.497266    4823 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:10:59.527019    4823 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:10:59.549141    4823 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1123 08:10:59.609762    4823 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 08:10:59.610039    4823 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:10:59.629500    4823 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:10:59.629598    4823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:10:59.699438    4823 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 08:10:59.689293111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:10:59.699544    4823 docker.go:319] overlay module found
	I1123 08:10:59.709275    4823 out.go:99] Using the docker driver based on user configuration
	I1123 08:10:59.709314    4823 start.go:309] selected driver: docker
	I1123 08:10:59.709322    4823 start.go:927] validating driver "docker" against <nil>
	I1123 08:10:59.709431    4823 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:10:59.773457    4823 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:47 SystemTime:2025-11-23 08:10:59.764583379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:10:59.773614    4823 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:10:59.773899    4823 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1123 08:10:59.774039    4823 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:10:59.785928    4823 out.go:171] Using Docker driver with root privileges
	I1123 08:10:59.795400    4823 cni.go:84] Creating CNI manager for ""
	I1123 08:10:59.795468    4823 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1123 08:10:59.795480    4823 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:10:59.795553    4823 start.go:353] cluster config:
	{Name:download-only-738140 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-738140 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:10:59.809671    4823 out.go:99] Starting "download-only-738140" primary control-plane node in "download-only-738140" cluster
	I1123 08:10:59.809703    4823 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1123 08:10:59.822194    4823 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:10:59.822241    4823 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:10:59.822397    4823 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:10:59.838484    4823 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:10:59.838623    4823 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:10:59.838639    4823 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 08:10:59.838644    4823 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 08:10:59.838650    4823 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 08:10:59.871714    4823 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 08:10:59.871742    4823 cache.go:65] Caching tarball of preloaded images
	I1123 08:10:59.871902    4823 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:10:59.891908    4823 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1123 08:10:59.891945    4823 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1123 08:10:59.972670    4823 preload.go:295] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1123 08:10:59.972723    4823 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21969-2811/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1123 08:11:04.155199    4823 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1123 08:11:04.155598    4823 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/download-only-738140/config.json ...
	I1123 08:11:04.155642    4823 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/download-only-738140/config.json: {Name:mk7ef29fd3a451456fd0fbe5a1b0a40fd73393f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:11:04.155841    4823 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1123 08:11:04.155999    4823 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21969-2811/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-738140 host does not exist
	  To start a cluster, run: "minikube start -p download-only-738140"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-738140
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.6s)

=== RUN   TestBinaryMirror
I1123 08:11:05.918444    4624 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-039750 --alsologtostderr --binary-mirror http://127.0.0.1:45803 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-039750" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-039750
--- PASS: TestBinaryMirror (0.60s)
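
Note: TestBinaryMirror points --binary-mirror at a local HTTP endpoint so the Kubernetes release binaries are fetched from it instead of dl.k8s.io; a rough sketch of the same invocation (127.0.0.1:45803 is simply the address the local mirror listened on in this run):

	out/minikube-linux-arm64 start --download-only -p binary-mirror-039750 \
	  --binary-mirror http://127.0.0.1:45803 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 delete -p binary-mirror-039750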

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-698781
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-698781: exit status 85 (73.305705ms)

-- stdout --
	* Profile "addons-698781" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-698781"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
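
Note: exit status 85 is the expected result here because the addons-698781 profile has not been created yet at this point in the run; the same behaviour can be observed directly (profile name taken from the log):

	out/minikube-linux-arm64 addons enable dashboard -p addons-698781
	echo $?   # 85 in this run, matching the "Profile ... not found" message above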

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-698781
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-698781: exit status 85 (69.925097ms)

-- stdout --
	* Profile "addons-698781" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-698781"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (172.27s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-698781 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-698781 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m52.264057738s)
--- PASS: TestAddons/Setup (172.27s)
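
Note: the Setup command above enables every addon exercised by the later subtests in a single start; a trimmed sketch with only a few of the addons (add or drop --addons flags as needed):

	out/minikube-linux-arm64 start -p addons-698781 --wait=true --memory=4096 \
	  --driver=docker --container-runtime=containerd \
	  --addons=registry --addons=metrics-server --addons=ingress --addons=ingress-dns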

TestAddons/serial/Volcano (39.73s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 71.400967ms
addons_test.go:884: volcano-controller stabilized in 71.979493ms
addons_test.go:868: volcano-scheduler stabilized in 72.653428ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-bpjld" [416ca349-fa79-4104-9453-72b3f64d0999] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.004109395s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-hf9z2" [150d65c9-3436-4991-a1dd-dbca49443c05] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.006692396s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-2mq5z" [9d64592a-e4ac-4711-9a24-000d6f6079fa] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.00593118s
addons_test.go:903: (dbg) Run:  kubectl --context addons-698781 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-698781 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-698781 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [43780f20-3dc0-4042-ae6e-56f44e2345ab] Pending
helpers_test.go:352: "test-job-nginx-0" [43780f20-3dc0-4042-ae6e-56f44e2345ab] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [43780f20-3dc0-4042-ae6e-56f44e2345ab] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 11.004354839s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-698781 addons disable volcano --alsologtostderr -v=1: (11.958212954s)
--- PASS: TestAddons/serial/Volcano (39.73s)
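
Note: the Volcano check deletes the admission init job, submits testdata/vcjob.yaml, and waits for the job's pod in the my-volcano namespace; the same steps by hand (context, file, and label selector taken from the log):

	kubectl --context addons-698781 delete -n volcano-system job volcano-admission-init
	kubectl --context addons-698781 create -f testdata/vcjob.yaml
	kubectl --context addons-698781 get vcjob -n my-volcano
	kubectl --context addons-698781 get pods -n my-volcano -l volcano.sh/job-name=test-job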

TestAddons/serial/GCPAuth/Namespaces (0.18s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-698781 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-698781 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.18s)

TestAddons/serial/GCPAuth/FakeCredentials (9.96s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-698781 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-698781 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e30c20b9-abd7-4ecd-95d7-fd53a6d7fea7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e30c20b9-abd7-4ecd-95d7-fd53a6d7fea7] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003318301s
addons_test.go:694: (dbg) Run:  kubectl --context addons-698781 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-698781 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-698781 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-698781 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.96s)
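
Note: the FakeCredentials check verifies that the gcp-auth webhook injects fake credentials into a freshly created pod; the probes it runs can be repeated directly against the busybox pod (all three commands appear in the log above):

	kubectl --context addons-698781 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
	kubectl --context addons-698781 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
	kubectl --context addons-698781 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"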

TestAddons/parallel/Registry (15.19s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.946614ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-zbj9w" [9f97d3fc-f148-48b6-9f82-52fb98d4d17b] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003697904s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-j8fxv" [d33cdcbc-21c8-45cd-8674-b9a6300f921f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003566141s
addons_test.go:392: (dbg) Run:  kubectl --context addons-698781 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-698781 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-698781 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.024860816s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 ip
2025/11/23 08:15:12 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.19s)
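
Note: the Registry check probes the addon both from inside the cluster (service DNS) and from the host (node IP on port 5000); a sketch of the two probes, with the node IP coming from minikube ip as in the log:

	kubectl --context addons-698781 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- \
	  sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	curl -sI "http://$(out/minikube-linux-arm64 -p addons-698781 ip):5000"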

TestAddons/parallel/RegistryCreds (0.75s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.756522ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-698781
addons_test.go:332: (dbg) Run:  kubectl --context addons-698781 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.75s)

TestAddons/parallel/Ingress (20.14s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-698781 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-698781 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-698781 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [9f129396-06e0-4852-8f30-4fdaf50a534c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [9f129396-06e0-4852-8f30-4fdaf50a534c] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003071792s
I1123 08:16:28.654465    4624 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-698781 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-698781 addons disable ingress-dns --alsologtostderr -v=1: (1.660780242s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-698781 addons disable ingress --alsologtostderr -v=1: (7.809514619s)
--- PASS: TestAddons/parallel/Ingress (20.14s)
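
Note: the Ingress check exercises both the nginx ingress controller (Host-header routing from inside the node) and ingress-dns (resolving ingress host names via the node IP); both probes come straight from the log:

	out/minikube-linux-arm64 -p addons-698781 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
	nslookup hello-john.test 192.168.49.2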

TestAddons/parallel/InspektorGadget (11.74s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-l7lkr" [df4d5a4f-51c7-41be-89b9-1ef8bf90cf7a] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003603458s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-698781 addons disable inspektor-gadget --alsologtostderr -v=1: (5.738780482s)
--- PASS: TestAddons/parallel/InspektorGadget (11.74s)

TestAddons/parallel/MetricsServer (5.79s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.801191ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-846zk" [e44a9fbc-5172-42a7-9e19-98e3c4258f9c] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.005592174s
addons_test.go:463: (dbg) Run:  kubectl --context addons-698781 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.79s)

TestAddons/parallel/CSI (46.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1123 08:15:09.768592    4624 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 08:15:09.771842    4624 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 08:15:09.771867    4624 kapi.go:107] duration metric: took 6.087469ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.09757ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-698781 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-698781 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [dffbe3a2-65cf-47e7-a782-eb972b19befe] Pending
helpers_test.go:352: "task-pv-pod" [dffbe3a2-65cf-47e7-a782-eb972b19befe] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [dffbe3a2-65cf-47e7-a782-eb972b19befe] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.005038926s
addons_test.go:572: (dbg) Run:  kubectl --context addons-698781 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-698781 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-698781 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-698781 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-698781 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-698781 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-698781 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [2a6fa3c3-cf68-4ba7-b348-c89c74f54e72] Pending
helpers_test.go:352: "task-pv-pod-restore" [2a6fa3c3-cf68-4ba7-b348-c89c74f54e72] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [2a6fa3c3-cf68-4ba7-b348-c89c74f54e72] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005612961s
addons_test.go:614: (dbg) Run:  kubectl --context addons-698781 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-698781 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-698781 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-698781 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.898245732s)
--- PASS: TestAddons/parallel/CSI (46.57s)
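
Note: the CSI sequence above is create PVC, mount it in a pod, snapshot it, then restore the snapshot into a new PVC and pod before cleaning up; the manifests all live under testdata/csi-hostpath-driver, so the flow can be replayed manually:

	kubectl --context addons-698781 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-698781 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-698781 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-698781 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-698781 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
	kubectl --context addons-698781 get pvc,volumesnapshot -n default   # watch phase/readyToUse as the test does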

TestAddons/parallel/Headlamp (11.35s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-698781 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-698781 --alsologtostderr -v=1: (1.025104312s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-dfcdc64b-r8hpp" [ccd38d19-99fa-4008-8424-00280b057012] Pending
helpers_test.go:352: "headlamp-dfcdc64b-r8hpp" [ccd38d19-99fa-4008-8424-00280b057012] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-r8hpp" [ccd38d19-99fa-4008-8424-00280b057012] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-dfcdc64b-r8hpp" [ccd38d19-99fa-4008-8424-00280b057012] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003730701s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable headlamp --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Headlamp (11.35s)

TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-z4g2s" [10a036a0-4d0f-4d4e-a792-95b15edd7448] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003214131s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (51.46s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-698781 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-698781 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-698781 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [e0d180c1-eb4b-470c-948d-07db13044f15] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [e0d180c1-eb4b-470c-948d-07db13044f15] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [e0d180c1-eb4b-470c-948d-07db13044f15] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003128953s
addons_test.go:967: (dbg) Run:  kubectl --context addons-698781 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 ssh "cat /opt/local-path-provisioner/pvc-f60bf357-0ce8-490b-abf4-501822eaf84d_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-698781 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-698781 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-698781 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.1511473s)
--- PASS: TestAddons/parallel/LocalPath (51.46s)
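
Note: the LocalPath check writes file1 through the test PVC and then reads it back from the node's local-path directory over SSH; the read-back command is the one from the log (the pvc-... directory name is unique to this run):

	out/minikube-linux-arm64 -p addons-698781 ssh \
	  "cat /opt/local-path-provisioner/pvc-f60bf357-0ce8-490b-abf4-501822eaf84d_default_test-pvc/file1"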

TestAddons/parallel/NvidiaDevicePlugin (6.55s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-6kzt5" [bfdc8c7c-4f80-414e-9269-0e8dab931690] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004204495s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.55s)

TestAddons/parallel/Yakd (11.86s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-68tm5" [9da6cc7b-95f7-4d9e-853f-324fed469c05] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003172212s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-698781 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-698781 addons disable yakd --alsologtostderr -v=1: (5.856275122s)
--- PASS: TestAddons/parallel/Yakd (11.86s)

TestAddons/StoppedEnableDisable (12.37s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-698781
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-698781: (12.096613133s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-698781
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-698781
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-698781
--- PASS: TestAddons/StoppedEnableDisable (12.37s)
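
StoppedEnableDisable checks that addon state can still be toggled once the profile is stopped; the final gvisor disable appears to target an addon that was never enabled here (an inference from context, not stated in the output). A hand-run sketch with an illustrative profile name:

    minikube stop -p demo
    minikube addons enable dashboard -p demo      # addon toggling works against a stopped cluster
    minikube addons disable dashboard -p demo
    minikube addons disable gvisor -p demo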

                                                
                                    
x
+
TestCertOptions (39.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-886452 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-886452 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (36.258985524s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-886452 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-886452 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-886452 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-886452" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-886452
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-886452: (2.121104344s)
--- PASS: TestCertOptions (39.11s)
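
The cert-options run starts a cluster with extra apiserver SANs and a non-default apiserver port, then inspects the generated certificate and kubeconfig. A hand-run sketch (profile name illustrative; the grep filters are a convenience for eyeballing the output, not something the test asserts verbatim):

    minikube start -p cert-demo --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 \
      --driver=docker --container-runtime=containerd
    # the extra IPs/names should show up as Subject Alternative Names on the apiserver cert
    minikube -p cert-demo ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # the non-default port should appear in the admin kubeconfig inside the node
    minikube ssh -p cert-demo -- "sudo cat /etc/kubernetes/admin.conf" | grep server
    minikube delete -p cert-demo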

                                                
                                    
x
+
TestCertExpiration (232.03s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-918102 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
E1123 08:56:02.172583    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-918102 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (40.761558728s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-918102 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-918102 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.280895053s)
helpers_test.go:175: Cleaning up "cert-expiration-918102" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-918102
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-918102: (2.988348507s)
--- PASS: TestCertExpiration (232.03s)
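
Most of the ~232 s here is the test waiting out the short 3-minute certificate lifetime before restarting with a long expiry to force regeneration (the two start commands themselves account for under a minute). A sketch under that assumption, with an illustrative profile name:

    minikube start -p cert-exp-demo --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=containerd
    # ...let the 3m certificates approach expiry, then restart with a one-year expiration
    minikube start -p cert-exp-demo --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=containerd
    minikube delete -p cert-exp-demo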

                                                
                                    
x
+
TestForceSystemdFlag (49.49s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-964934 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-964934 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (46.617543048s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-964934 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-964934" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-964934
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-964934: (2.522702843s)
--- PASS: TestForceSystemdFlag (49.49s)
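
--force-systemd switches the container runtime to the systemd cgroup driver, and the test verifies the effect by dumping the containerd config from the node. A hand-run sketch; grepping for the systemd setting is an assumption about what to look for in /etc/containerd/config.toml, not an assertion the test makes here:

    minikube start -p systemd-demo --memory=3072 --force-systemd --driver=docker --container-runtime=containerd
    minikube -p systemd-demo ssh "cat /etc/containerd/config.toml" | grep -i systemd   # expect the systemd cgroup option to be enabled
    minikube delete -p systemd-demo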

                                                
                                    
x
+
TestForceSystemdEnv (44.65s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-023309 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-023309 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.06122043s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-023309 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-023309" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-023309
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-023309: (3.064824415s)
--- PASS: TestForceSystemdEnv (44.65s)

                                                
                                    
x
+
TestDockerEnvContainerd (49.91s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-283451 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-283451 --driver=docker  --container-runtime=containerd: (33.728694526s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-283451"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-283451": (1.086891106s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-psn6oT3ub8IQ/agent.24079" SSH_AGENT_PID="24080" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-psn6oT3ub8IQ/agent.24079" SSH_AGENT_PID="24080" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-psn6oT3ub8IQ/agent.24079" SSH_AGENT_PID="24080" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.188704172s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-psn6oT3ub8IQ/agent.24079" SSH_AGENT_PID="24080" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-283451" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-283451
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-283451: (2.43709452s)
--- PASS: TestDockerEnvContainerd (49.91s)
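
docker-env over SSH points a host docker CLI at the daemon inside the minikube node; the harness captures the emitted SSH_AUTH_SOCK/SSH_AGENT_PID/DOCKER_HOST values and replays them per command. Interactively the same flow is usually driven with eval (a convenience, not literally what the test runs); the profile name and build directory below are illustrative:

    minikube start -p dockerenv-demo --driver=docker --container-runtime=containerd
    eval "$(minikube docker-env --ssh-host --ssh-add -p dockerenv-demo)"
    docker version                      # now talks to the daemon in the node over ssh://
    # any directory containing a Dockerfile works; the test uses its testdata/docker-env fixture
    DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest ./docker-env-demo
    docker image ls
    minikube delete -p dockerenv-demo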

                                                
                                    
x
+
TestErrorSpam/setup (32.41s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-199569 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-199569 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-199569 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-199569 --driver=docker  --container-runtime=containerd: (32.413812485s)
--- PASS: TestErrorSpam/setup (32.41s)

                                                
                                    
x
+
TestErrorSpam/start (0.81s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 start --dry-run
--- PASS: TestErrorSpam/start (0.81s)

                                                
                                    
x
+
TestErrorSpam/status (1.09s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 status
--- PASS: TestErrorSpam/status (1.09s)

                                                
                                    
x
+
TestErrorSpam/pause (1.78s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 pause
--- PASS: TestErrorSpam/pause (1.78s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.84s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 unpause
--- PASS: TestErrorSpam/unpause (1.84s)

                                                
                                    
x
+
TestErrorSpam/stop (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 stop: (1.395257623s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-199569 --log_dir /tmp/nospam-199569 stop
--- PASS: TestErrorSpam/stop (1.60s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21969-2811/.minikube/files/etc/test/nested/copy/4624/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.01s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (81.6s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-177240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1123 08:18:58.901323    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:58.907637    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:58.918976    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:58.940294    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:58.981605    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:59.062935    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:59.224354    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:18:59.545951    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:00.194448    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:01.475834    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:04.037967    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:09.159743    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:19.403465    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:19:39.885727    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-177240 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m21.599850938s)
--- PASS: TestFunctional/serial/StartWithProxy (81.60s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (7.27s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1123 08:19:53.500680    4624 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-177240 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-177240 --alsologtostderr -v=8: (7.271579331s)
functional_test.go:678: soft start took 7.273318436s for "functional-177240" cluster.
I1123 08:20:00.772657    4624 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (7.27s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-177240 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-177240 cache add registry.k8s.io/pause:3.1: (1.284061562s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-177240 cache add registry.k8s.io/pause:3.3: (1.112893724s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-177240 cache add registry.k8s.io/pause:latest: (1.037119405s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.43s)
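
cache add pulls an image on the host and loads it into the node's image store, which is why each add above costs roughly a second. A minimal sketch with an illustrative profile name:

    minikube -p functional-demo cache add registry.k8s.io/pause:3.1
    minikube -p functional-demo cache add registry.k8s.io/pause:3.3
    minikube -p functional-demo cache add registry.k8s.io/pause:latest
    minikube cache list                                     # list the cached images
    minikube -p functional-demo ssh sudo crictl images      # confirm they are visible inside the node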

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-177240 /tmp/TestFunctionalserialCacheCmdcacheadd_local1466243457/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 cache add minikube-local-cache-test:functional-177240
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 cache delete minikube-local-cache-test:functional-177240
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-177240
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.24s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.9s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-177240 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.582633ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.90s)
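
The reload test removes an image inside the node with crictl, confirms it is gone (the non-zero inspecti above), then uses cache reload to push everything in the cache back into the node. The same sequence by hand, against an illustrative profile:

    minikube -p functional-demo ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    minikube -p functional-demo cache reload                                            # re-loads cached images into the node
    minikube -p functional-demo ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again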

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 kubectl -- --context functional-177240 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-177240 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (43.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-177240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1123 08:20:20.848539    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-177240 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.081471707s)
functional_test.go:776: restart took 43.081580279s for "functional-177240" cluster.
I1123 08:20:51.412830    4624 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (43.08s)
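
--extra-config forwards per-component flags at start time; here it enables the NamespaceAutoProvision admission plugin on the apiserver, and --wait=all makes the restart block until every component reports healthy (hence the ~43 s). A hand-run sketch; the kubectl check at the end is an illustrative way to confirm the flag landed, not part of the test:

    minikube start -p functional-demo --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
    kubectl --context functional-demo -n kube-system get pod -l component=kube-apiserver -o yaml | grep enable-admission-plugins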

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-177240 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-177240 logs: (1.434999139s)
--- PASS: TestFunctional/serial/LogsCmd (1.44s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 logs --file /tmp/TestFunctionalserialLogsFileCmd2477786795/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-177240 logs --file /tmp/TestFunctionalserialLogsFileCmd2477786795/001/logs.txt: (1.421792564s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.42s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.78s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-177240 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-177240
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-177240: exit status 115 (457.607048ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32452 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-177240 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-177240 delete -f testdata/invalidsvc.yaml: (1.077138454s)
--- PASS: TestFunctional/serial/InvalidService (4.78s)
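
The negative test creates a Service with no ready backing pod and expects minikube service to exit 115 (SVC_UNREACHABLE) instead of printing a dead URL. A minimal reproduction; the service below is a stand-in for testdata/invalidsvc.yaml, not its exact contents:

    # a NodePort service whose selector (app=invalid-svc) matches no pods
    kubectl --context functional-demo create service nodeport invalid-svc --tcp=80:80
    minikube service invalid-svc -p functional-demo     # exits 115: no running pod for service invalid-svc found
    kubectl --context functional-demo delete service invalid-svc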

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-177240 config get cpus: exit status 14 (88.416196ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-177240 config get cpus: exit status 14 (81.154427ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
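
minikube config stores per-key defaults; config get on an unset key exits 14 with "specified key could not be found in config", which is exactly what the unset/get pairs above assert. By hand, with an illustrative profile name:

    minikube -p functional-demo config unset cpus
    minikube -p functional-demo config get cpus     # exit status 14: key not set
    minikube -p functional-demo config set cpus 2
    minikube -p functional-demo config get cpus     # prints 2
    minikube -p functional-demo config unset cpus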

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (7.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-177240 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-177240 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 40409: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (7.35s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-177240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-177240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (242.2492ms)

                                                
                                                
-- stdout --
	* [functional-177240] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:21:35.540851   40085 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:21:35.541000   40085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:21:35.541025   40085 out.go:374] Setting ErrFile to fd 2...
	I1123 08:21:35.541042   40085 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:21:35.541322   40085 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:21:35.541754   40085 out.go:368] Setting JSON to false
	I1123 08:21:35.542667   40085 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3848,"bootTime":1763882248,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:21:35.542748   40085 start.go:143] virtualization:  
	I1123 08:21:35.546539   40085 out.go:179] * [functional-177240] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:21:35.549555   40085 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:21:35.549633   40085 notify.go:221] Checking for updates...
	I1123 08:21:35.555322   40085 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:21:35.558349   40085 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:21:35.561753   40085 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:21:35.564674   40085 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:21:35.567515   40085 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:21:35.570818   40085 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:21:35.571493   40085 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:21:35.607605   40085 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:21:35.607708   40085 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:21:35.701618   40085 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 08:21:35.690269577 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:21:35.701725   40085 docker.go:319] overlay module found
	I1123 08:21:35.705769   40085 out.go:179] * Using the docker driver based on existing profile
	I1123 08:21:35.708909   40085 start.go:309] selected driver: docker
	I1123 08:21:35.708928   40085 start.go:927] validating driver "docker" against &{Name:functional-177240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-177240 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:21:35.709039   40085 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:21:35.712817   40085 out.go:203] 
	W1123 08:21:35.715683   40085 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 08:21:35.718451   40085 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-177240 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.50s)
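
--dry-run validates flags and driver selection without creating or changing anything; the first run above is expected to fail fast with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MB is below the 1800MB usable minimum, while the second dry run against the profile's existing settings succeeds. The equivalent manual check:

    minikube start -p functional-demo --dry-run --memory 250MB --driver=docker --container-runtime=containerd    # expected to exit 23
    minikube start -p functional-demo --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=containerd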

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-177240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-177240 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (233.443158ms)

                                                
                                                
-- stdout --
	* [functional-177240] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:21:35.337462   40038 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:21:35.337635   40038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:21:35.337647   40038 out.go:374] Setting ErrFile to fd 2...
	I1123 08:21:35.337652   40038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:21:35.338996   40038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:21:35.339439   40038 out.go:368] Setting JSON to false
	I1123 08:21:35.340529   40038 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":3848,"bootTime":1763882248,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:21:35.340600   40038 start.go:143] virtualization:  
	I1123 08:21:35.343987   40038 out.go:179] * [functional-177240] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1123 08:21:35.347088   40038 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:21:35.347190   40038 notify.go:221] Checking for updates...
	I1123 08:21:35.353398   40038 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:21:35.356302   40038 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:21:35.359286   40038 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:21:35.362149   40038 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:21:35.365022   40038 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:21:35.368346   40038 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:21:35.368935   40038 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:21:35.397949   40038 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:21:35.398050   40038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:21:35.460904   40038 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-23 08:21:35.452011701 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pat
h:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:21:35.461007   40038 docker.go:319] overlay module found
	I1123 08:21:35.464187   40038 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 08:21:35.466998   40038 start.go:309] selected driver: docker
	I1123 08:21:35.467035   40038 start.go:927] validating driver "docker" against &{Name:functional-177240 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-177240 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpt
ions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:21:35.467146   40038 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:21:35.471066   40038 out.go:203] 
	W1123 08:21:35.474189   40038 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 08:21:35.477105   40038 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.42s)
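
status accepts both a Go-template format string and JSON output; the template above (including its "kublet" label spelling) comes from the test itself. A hand-run equivalent against an illustrative profile:

    minikube -p functional-demo status
    minikube -p functional-demo status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    minikube -p functional-demo status -o json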

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (8.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-177240 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-177240 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-dgmb4" [a0b30f24-e132-4339-91aa-ab2ac9d2c8e2] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-dgmb4" [a0b30f24-e132-4339-91aa-ab2ac9d2c8e2] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.003008317s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:32733
functional_test.go:1680: http://192.168.49.2:32733: success! body:
Request served by hello-node-connect-7d85dfc575-dgmb4

HTTP/1.1 GET /

Host: 192.168.49.2:32733
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.65s)
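Condensed, the flow exercised above can be replayed by hand; the profile and resource names are taken from the log, and the final curl is an assumed stand-in for the test's HTTP client:

    kubectl --context functional-177240 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-177240 expose deployment hello-node-connect --type=NodePort --port=8080
    # Resolve the NodePort URL through minikube, then probe it
    URL=$(out/minikube-linux-arm64 -p functional-177240 service hello-node-connect --url)
    curl -s "$URL"    # echo-server replies with the request it received, as in the body captured above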

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

TestFunctional/parallel/PersistentVolumeClaim (24.32s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [bd86026a-c9f3-4b9b-8f00-3fea3de0c6f7] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00380685s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-177240 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-177240 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-177240 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-177240 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d359528f-2e35-4c70-86b6-bea8463da697] Pending
helpers_test.go:352: "sp-pod" [d359528f-2e35-4c70-86b6-bea8463da697] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d359528f-2e35-4c70-86b6-bea8463da697] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003569469s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-177240 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-177240 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-177240 delete -f testdata/storage-provisioner/pod.yaml: (1.251466847s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-177240 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [dbaa463e-15e2-4523-9827-c59f93e5f2d8] Pending
helpers_test.go:352: "sp-pod" [dbaa463e-15e2-4523-9827-c59f93e5f2d8] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.003491985s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-177240 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.32s)
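The manifests applied above are not reproduced in the log; the following is a sketch of their likely shape, with the PVC name (myclaim), pod name (sp-pod), container name (myfrontend), label (test=storage-provisioner) and mount path (/tmp/mount) taken from the output, and the image and storage size assumed:

    cat <<'EOF' | kubectl --context functional-177240 apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 500Mi        # size assumed; the real testdata file may differ
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: sp-pod
      labels:
        test: storage-provisioner
    spec:
      containers:
      - name: myfrontend
        image: nginx            # image assumed
        volumeMounts:
        - name: mypd
          mountPath: /tmp/mount
      volumes:
      - name: mypd
        persistentVolumeClaim:
          claimName: myclaim
    EOF
    # The test then writes /tmp/mount/foo, deletes and recreates the pod,
    # and checks that the file survived on the claimed volume.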

                                                
                                    
TestFunctional/parallel/SSHCmd (0.93s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.93s)

TestFunctional/parallel/CpCmd (2.35s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh -n functional-177240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 cp functional-177240:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1972658139/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh -n functional-177240 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh -n functional-177240 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.35s)

TestFunctional/parallel/FileSync (0.37s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4624/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo cat /etc/test/nested/copy/4624/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.37s)

TestFunctional/parallel/CertSync (2.2s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4624.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo cat /etc/ssl/certs/4624.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4624.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo cat /usr/share/ca-certificates/4624.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/46242.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo cat /etc/ssl/certs/46242.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/46242.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo cat /usr/share/ca-certificates/46242.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.20s)
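The hashed filenames checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention used by update-ca-certificates: each synced certificate also gets a <hash>.0 entry under /etc/ssl/certs. A small sketch of verifying that correspondence on the node (paths from the log; that the hash matches 51391683 is an assumption this check would confirm):

    # Print the subject hash OpenSSL derives for the synced certificate;
    # it should match the 51391683.0 entry checked above.
    out/minikube-linux-arm64 -p functional-177240 ssh \
      "openssl x509 -noout -hash -in /usr/share/ca-certificates/4624.pem"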

                                                
                                    
TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-177240 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-177240 ssh "sudo systemctl is-active docker": exit status 1 (367.946966ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-177240 ssh "sudo systemctl is-active crio": exit status 1 (367.619183ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.74s)
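The two non-zero exits above are the expected outcome: this profile runs containerd, so docker and crio must be inactive, and systemctl is-active exits with status 3 for an inactive unit (hence "Process exited with status 3" in the captured stderr, which minikube ssh surfaces as a non-zero exit). A quick manual check along the same lines:

    out/minikube-linux-arm64 -p functional-177240 ssh "sudo systemctl is-active containerd"  # "active", exit 0
    out/minikube-linux-arm64 -p functional-177240 ssh "sudo systemctl is-active docker"      # "inactive", non-zero exit
    out/minikube-linux-arm64 -p functional-177240 ssh "sudo systemctl is-active crio"        # "inactive", non-zero exit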

                                                
                                    
TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/Version/short (0.09s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 version --short
--- PASS: TestFunctional/parallel/Version/short (0.09s)

TestFunctional/parallel/Version/components (1.42s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-177240 version -o=json --components: (1.421978819s)
--- PASS: TestFunctional/parallel/Version/components (1.42s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-177240 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-177240 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-177240 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-177240 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 36557: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.66s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.42s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-177240 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-177240
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-177240
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-177240 image ls --format short --alsologtostderr:
I1123 08:21:44.913773   41678 out.go:360] Setting OutFile to fd 1 ...
I1123 08:21:44.913962   41678 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:21:44.913983   41678 out.go:374] Setting ErrFile to fd 2...
I1123 08:21:44.914004   41678 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:21:44.914400   41678 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
I1123 08:21:44.915398   41678 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:21:44.915652   41678 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:21:44.916833   41678 cli_runner.go:164] Run: docker container inspect functional-177240 --format={{.State.Status}}
I1123 08:21:44.954311   41678 ssh_runner.go:195] Run: systemctl --version
I1123 08:21:44.954373   41678 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-177240
I1123 08:21:44.980382   41678 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/functional-177240/id_rsa Username:docker}
I1123 08:21:45.196615   41678 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.42s)
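As the captured stderr shows, image ls is answered by running sudo crictl images --output json inside the node over SSH (containerd runtime, so crictl rather than docker images). The same data can be inspected directly; a minimal sketch using the profile from this run:

    # Raw listing that the minikube 'image ls' formatters consume
    out/minikube-linux-arm64 -p functional-177240 ssh "sudo crictl images --output json"
    # Human-readable equivalent
    out/minikube-linux-arm64 -p functional-177240 ssh "sudo crictl images"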

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-177240 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kicbase/echo-server               │ functional-177240  │ sha256:ce2d2c │ 2.17MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:ce2d2c │ 2.17MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/minikube-local-cache-test │ functional-177240  │ sha256:c8e16f │ 991B   │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-177240 image ls --format table --alsologtostderr:
I1123 08:21:46.172748   42023 out.go:360] Setting OutFile to fd 1 ...
I1123 08:21:46.173295   42023 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:21:46.173339   42023 out.go:374] Setting ErrFile to fd 2...
I1123 08:21:46.173370   42023 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:21:46.173679   42023 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
I1123 08:21:46.174403   42023 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:21:46.174865   42023 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:21:46.175660   42023 cli_runner.go:164] Run: docker container inspect functional-177240 --format={{.State.Status}}
I1123 08:21:46.199270   42023 ssh_runner.go:195] Run: systemctl --version
I1123 08:21:46.199324   42023 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-177240
I1123 08:21:46.225525   42023 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/functional-177240/id_rsa Username:docker}
I1123 08:21:46.340640   42023 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-177240 image ls --format json --alsologtostderr:
[{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:c8e16f9b5dc459b27739d53fe278f0a956872592dbf97de1cd8085a9c86e1876","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-177240"],"size":"991"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":[
"docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a92
36d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-177240","docker.io/kicbase/echo-server:latest"],"size":"2173567"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14
dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-contr
oller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-177240 image ls --format json --alsologtostderr:
I1123 08:21:45.909406   41956 out.go:360] Setting OutFile to fd 1 ...
I1123 08:21:45.909532   41956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:21:45.909542   41956 out.go:374] Setting ErrFile to fd 2...
I1123 08:21:45.909547   41956 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:21:45.909809   41956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
I1123 08:21:45.910402   41956 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:21:45.910524   41956 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:21:45.911027   41956 cli_runner.go:164] Run: docker container inspect functional-177240 --format={{.State.Status}}
I1123 08:21:45.934003   41956 ssh_runner.go:195] Run: systemctl --version
I1123 08:21:45.934053   41956 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-177240
I1123 08:21:45.964230   41956 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/functional-177240/id_rsa Username:docker}
I1123 08:21:46.074974   41956 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-177240 image ls --format yaml --alsologtostderr:
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-177240
- docker.io/kicbase/echo-server:latest
size: "2173567"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:c8e16f9b5dc459b27739d53fe278f0a956872592dbf97de1cd8085a9c86e1876
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-177240
size: "991"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-177240 image ls --format yaml --alsologtostderr:
I1123 08:21:45.344624   41790 out.go:360] Setting OutFile to fd 1 ...
I1123 08:21:45.344830   41790 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:21:45.344845   41790 out.go:374] Setting ErrFile to fd 2...
I1123 08:21:45.344851   41790 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:21:45.345150   41790 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
I1123 08:21:45.346171   41790 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:21:45.346445   41790 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:21:45.347685   41790 cli_runner.go:164] Run: docker container inspect functional-177240 --format={{.State.Status}}
I1123 08:21:45.387176   41790 ssh_runner.go:195] Run: systemctl --version
I1123 08:21:45.387221   41790 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-177240
I1123 08:21:45.422559   41790 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/functional-177240/id_rsa Username:docker}
I1123 08:21:45.534977   41790 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.33s)

TestFunctional/parallel/ImageCommands/ImageBuild (4.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-177240 ssh pgrep buildkitd: exit status 1 (352.559843ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image build -t localhost/my-image:functional-177240 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-177240 image build -t localhost/my-image:functional-177240 testdata/build --alsologtostderr: (3.701918474s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-177240 image build -t localhost/my-image:functional-177240 testdata/build --alsologtostderr:
I1123 08:21:46.017286   41977 out.go:360] Setting OutFile to fd 1 ...
I1123 08:21:46.017755   41977 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:21:46.017763   41977 out.go:374] Setting ErrFile to fd 2...
I1123 08:21:46.017768   41977 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:21:46.018229   41977 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
I1123 08:21:46.022502   41977 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:21:46.029535   41977 config.go:182] Loaded profile config "functional-177240": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1123 08:21:46.030108   41977 cli_runner.go:164] Run: docker container inspect functional-177240 --format={{.State.Status}}
I1123 08:21:46.049992   41977 ssh_runner.go:195] Run: systemctl --version
I1123 08:21:46.050044   41977 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-177240
I1123 08:21:46.068684   41977 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/functional-177240/id_rsa Username:docker}
I1123 08:21:46.191974   41977 build_images.go:162] Building image from path: /tmp/build.3261651938.tar
I1123 08:21:46.192053   41977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 08:21:46.201606   41977 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3261651938.tar
I1123 08:21:46.206512   41977 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3261651938.tar: stat -c "%s %y" /var/lib/minikube/build/build.3261651938.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3261651938.tar': No such file or directory
I1123 08:21:46.206541   41977 ssh_runner.go:362] scp /tmp/build.3261651938.tar --> /var/lib/minikube/build/build.3261651938.tar (3072 bytes)
I1123 08:21:46.231945   41977 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3261651938
I1123 08:21:46.245510   41977 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3261651938 -xf /var/lib/minikube/build/build.3261651938.tar
I1123 08:21:46.254945   41977 containerd.go:394] Building image: /var/lib/minikube/build/build.3261651938
I1123 08:21:46.255027   41977 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3261651938 --local dockerfile=/var/lib/minikube/build/build.3261651938 --output type=image,name=localhost/my-image:functional-177240
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.7s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:46e51b081f38a56c98a5f86267bf2355c2e7ecba716e770dee8141e56745fbb5
#8 exporting manifest sha256:46e51b081f38a56c98a5f86267bf2355c2e7ecba716e770dee8141e56745fbb5 0.0s done
#8 exporting config sha256:f47790bd1dc4c2a2f69bb8413a7611c6a4fd1902dea32a96d0bd27aad7cb52ed 0.0s done
#8 naming to localhost/my-image:functional-177240 done
#8 DONE 0.2s
I1123 08:21:49.617995   41977 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.3261651938 --local dockerfile=/var/lib/minikube/build/build.3261651938 --output type=image,name=localhost/my-image:functional-177240: (3.362939571s)
I1123 08:21:49.618067   41977 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3261651938
I1123 08:21:49.626331   41977 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3261651938.tar
I1123 08:21:49.634339   41977 build_images.go:218] Built localhost/my-image:functional-177240 from /tmp/build.3261651938.tar
I1123 08:21:49.634369   41977 build_images.go:134] succeeded building to: functional-177240
I1123 08:21:49.634388   41977 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (4.28s)
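From the BuildKit trace above (a 97-byte Dockerfile, a 62-byte context containing content.txt, and stages FROM / RUN true / ADD), the build input can be reconstructed approximately; the exact file contents are not in the log, so this is an inferred sketch rather than the repository's testdata:

    # Approximate reconstruction of testdata/build, inferred from steps #1-#8 above
    mkdir -p build && cd build
    printf 'hello\n' > content.txt          # contents assumed; only the 62-byte context size is known
    cat > Dockerfile <<'EOF'
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /
    EOF
    # minikube ships this context to the node and drives buildctl, as shown in the
    # 'sudo buildctl build --frontend dockerfile.v0 ...' line of the trace.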

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.7s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-177240
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.70s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-177240 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.53s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-177240 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [2dd42b0d-3182-47a5-8558-387fecf1b943] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [2dd42b0d-3182-47a5-8558-387fecf1b943] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.002832395s
I1123 08:21:11.700516    4624 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.53s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image load --daemon kicbase/echo-server:functional-177240 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-177240 image load --daemon kicbase/echo-server:functional-177240 --alsologtostderr: (1.164566982s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.46s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image load --daemon kicbase/echo-server:functional-177240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.24s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-177240
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image load --daemon kicbase/echo-server:functional-177240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.31s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image save kicbase/echo-server:functional-177240 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image rm kicbase/echo-server:functional-177240 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.67s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-177240
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 image save --daemon kicbase/echo-server:functional-177240 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-177240
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-177240 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.08s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.102.193.156 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-177240 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-177240 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-177240 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-ct5nb" [b783e895-19f0-47c3-9815-ddcd1a6601c4] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-ct5nb" [b783e895-19f0-47c3-9815-ddcd1a6601c4] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003707441s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.21s)

TestFunctional/parallel/ServiceCmd/List (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.53s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 service list -o json
functional_test.go:1504: Took "524.168528ms" to run "out/minikube-linux-arm64 -p functional-177240 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:32740
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.4s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.40s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.43s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32740
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.17s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "382.816867ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "66.701307ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "397.882959ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "54.20642ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.73s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdany-port4091616920/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763886092964682401" to /tmp/TestFunctionalparallelMountCmdany-port4091616920/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763886092964682401" to /tmp/TestFunctionalparallelMountCmdany-port4091616920/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763886092964682401" to /tmp/TestFunctionalparallelMountCmdany-port4091616920/001/test-1763886092964682401
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-177240 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (450.082645ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1123 08:21:33.418589    4624 retry.go:31] will retry after 428.827603ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 08:21 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 08:21 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 08:21 test-1763886092964682401
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh cat /mount-9p/test-1763886092964682401
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-177240 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [750ba7ac-1832-4cf1-b551-952dca52b0b6] Pending
helpers_test.go:352: "busybox-mount" [750ba7ac-1832-4cf1-b551-952dca52b0b6] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [750ba7ac-1832-4cf1-b551-952dca52b0b6] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [750ba7ac-1832-4cf1-b551-952dca52b0b6] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003350037s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-177240 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdany-port4091616920/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.73s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.37s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdspecific-port2309576130/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-177240 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (541.387648ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1123 08:21:42.236907    4624 retry.go:31] will retry after 675.379119ms: exit status 1
E1123 08:21:42.770131    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "findmnt -T /mount-9p | grep 9p"
2025/11/23 08:21:43 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdspecific-port2309576130/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-177240 ssh "sudo umount -f /mount-9p": exit status 1 (330.092329ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-177240 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdspecific-port2309576130/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.37s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1403655606/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1403655606/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1403655606/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-177240 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-177240 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1403655606/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1403655606/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-177240 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1403655606/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.79s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-177240
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.03s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-177240
--- PASS: TestFunctional/delete_my-image_image (0.03s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-177240
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (167.61s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1123 08:23:58.897636    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:24:26.612016    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (2m46.69618815s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (167.61s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (7.37s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 kubectl -- rollout status deployment/busybox: (4.231737497s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-46hnb -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-8q86k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-xsqnl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-46hnb -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-8q86k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-xsqnl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-46hnb -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-8q86k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-xsqnl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.37s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.59s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-46hnb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-46hnb -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-8q86k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-8q86k -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-xsqnl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 kubectl -- exec busybox-7b57f96db7-xsqnl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.59s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (32.4s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 node add --alsologtostderr -v 5: (31.323259169s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5: (1.07923818s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.40s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-990595 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.084483189s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (21.3s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 status --output json --alsologtostderr -v 5: (1.103947379s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp testdata/cp-test.txt ha-990595:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3010890071/001/cp-test_ha-990595.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595:/home/docker/cp-test.txt ha-990595-m02:/home/docker/cp-test_ha-990595_ha-990595-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m02 "sudo cat /home/docker/cp-test_ha-990595_ha-990595-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595:/home/docker/cp-test.txt ha-990595-m03:/home/docker/cp-test_ha-990595_ha-990595-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m03 "sudo cat /home/docker/cp-test_ha-990595_ha-990595-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595:/home/docker/cp-test.txt ha-990595-m04:/home/docker/cp-test_ha-990595_ha-990595-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m04 "sudo cat /home/docker/cp-test_ha-990595_ha-990595-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp testdata/cp-test.txt ha-990595-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3010890071/001/cp-test_ha-990595-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m02:/home/docker/cp-test.txt ha-990595:/home/docker/cp-test_ha-990595-m02_ha-990595.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595 "sudo cat /home/docker/cp-test_ha-990595-m02_ha-990595.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m02:/home/docker/cp-test.txt ha-990595-m03:/home/docker/cp-test_ha-990595-m02_ha-990595-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m03 "sudo cat /home/docker/cp-test_ha-990595-m02_ha-990595-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m02:/home/docker/cp-test.txt ha-990595-m04:/home/docker/cp-test_ha-990595-m02_ha-990595-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m04 "sudo cat /home/docker/cp-test_ha-990595-m02_ha-990595-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp testdata/cp-test.txt ha-990595-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3010890071/001/cp-test_ha-990595-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m03:/home/docker/cp-test.txt ha-990595:/home/docker/cp-test_ha-990595-m03_ha-990595.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595 "sudo cat /home/docker/cp-test_ha-990595-m03_ha-990595.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m03:/home/docker/cp-test.txt ha-990595-m02:/home/docker/cp-test_ha-990595-m03_ha-990595-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m02 "sudo cat /home/docker/cp-test_ha-990595-m03_ha-990595-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m03:/home/docker/cp-test.txt ha-990595-m04:/home/docker/cp-test_ha-990595-m03_ha-990595-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m04 "sudo cat /home/docker/cp-test_ha-990595-m03_ha-990595-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp testdata/cp-test.txt ha-990595-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3010890071/001/cp-test_ha-990595-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m04:/home/docker/cp-test.txt ha-990595:/home/docker/cp-test_ha-990595-m04_ha-990595.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595 "sudo cat /home/docker/cp-test_ha-990595-m04_ha-990595.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m04:/home/docker/cp-test.txt ha-990595-m02:/home/docker/cp-test_ha-990595-m04_ha-990595-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m02 "sudo cat /home/docker/cp-test_ha-990595-m04_ha-990595-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 cp ha-990595-m04:/home/docker/cp-test.txt ha-990595-m03:/home/docker/cp-test_ha-990595-m04_ha-990595-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 ssh -n ha-990595-m03 "sudo cat /home/docker/cp-test_ha-990595-m04_ha-990595-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.30s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (13.01s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 node stop m02 --alsologtostderr -v 5: (12.166628587s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5: exit status 7 (842.745324ms)
-- stdout --
	ha-990595
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-990595-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-990595-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-990595-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1123 08:25:56.383290   58528 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:25:56.383534   58528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:25:56.383548   58528 out.go:374] Setting ErrFile to fd 2...
	I1123 08:25:56.383554   58528 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:25:56.383873   58528 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:25:56.384112   58528 out.go:368] Setting JSON to false
	I1123 08:25:56.384155   58528 mustload.go:66] Loading cluster: ha-990595
	I1123 08:25:56.384250   58528 notify.go:221] Checking for updates...
	I1123 08:25:56.384700   58528 config.go:182] Loaded profile config "ha-990595": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:25:56.384720   58528 status.go:174] checking status of ha-990595 ...
	I1123 08:25:56.385312   58528 cli_runner.go:164] Run: docker container inspect ha-990595 --format={{.State.Status}}
	I1123 08:25:56.404318   58528 status.go:371] ha-990595 host status = "Running" (err=<nil>)
	I1123 08:25:56.404341   58528 host.go:66] Checking if "ha-990595" exists ...
	I1123 08:25:56.404647   58528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-990595
	I1123 08:25:56.444314   58528 host.go:66] Checking if "ha-990595" exists ...
	I1123 08:25:56.445824   58528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:25:56.445883   58528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-990595
	I1123 08:25:56.468789   58528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/ha-990595/id_rsa Username:docker}
	I1123 08:25:56.573572   58528 ssh_runner.go:195] Run: systemctl --version
	I1123 08:25:56.581658   58528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:25:56.595004   58528 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:25:56.659940   58528 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-23 08:25:56.649655353 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:25:56.660565   58528 kubeconfig.go:125] found "ha-990595" server: "https://192.168.49.254:8443"
	I1123 08:25:56.660598   58528 api_server.go:166] Checking apiserver status ...
	I1123 08:25:56.660661   58528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:25:56.674241   58528 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1388/cgroup
	I1123 08:25:56.682896   58528 api_server.go:182] apiserver freezer: "3:freezer:/docker/83f141bf048bb0492fc7b5451d282fbe66ab03114f9fd7847e5a1462b7554200/kubepods/burstable/pod086340978a457166feaca32e55b3cb19/c71e8607df6b091bba23d7185f5cd25947dc133a3bea99a183eacf7d7a9433f4"
	I1123 08:25:56.682966   58528 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/83f141bf048bb0492fc7b5451d282fbe66ab03114f9fd7847e5a1462b7554200/kubepods/burstable/pod086340978a457166feaca32e55b3cb19/c71e8607df6b091bba23d7185f5cd25947dc133a3bea99a183eacf7d7a9433f4/freezer.state
	I1123 08:25:56.690923   58528 api_server.go:204] freezer state: "THAWED"
	I1123 08:25:56.690954   58528 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:25:56.699339   58528 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:25:56.699413   58528 status.go:463] ha-990595 apiserver status = Running (err=<nil>)
	I1123 08:25:56.699426   58528 status.go:176] ha-990595 status: &{Name:ha-990595 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:25:56.699443   58528 status.go:174] checking status of ha-990595-m02 ...
	I1123 08:25:56.699765   58528 cli_runner.go:164] Run: docker container inspect ha-990595-m02 --format={{.State.Status}}
	I1123 08:25:56.721295   58528 status.go:371] ha-990595-m02 host status = "Stopped" (err=<nil>)
	I1123 08:25:56.721316   58528 status.go:384] host is not running, skipping remaining checks
	I1123 08:25:56.721322   58528 status.go:176] ha-990595-m02 status: &{Name:ha-990595-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:25:56.721347   58528 status.go:174] checking status of ha-990595-m03 ...
	I1123 08:25:56.721661   58528 cli_runner.go:164] Run: docker container inspect ha-990595-m03 --format={{.State.Status}}
	I1123 08:25:56.746868   58528 status.go:371] ha-990595-m03 host status = "Running" (err=<nil>)
	I1123 08:25:56.746892   58528 host.go:66] Checking if "ha-990595-m03" exists ...
	I1123 08:25:56.747200   58528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-990595-m03
	I1123 08:25:56.765809   58528 host.go:66] Checking if "ha-990595-m03" exists ...
	I1123 08:25:56.766139   58528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:25:56.766189   58528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-990595-m03
	I1123 08:25:56.784698   58528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/ha-990595-m03/id_rsa Username:docker}
	I1123 08:25:56.893181   58528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:25:56.907841   58528 kubeconfig.go:125] found "ha-990595" server: "https://192.168.49.254:8443"
	I1123 08:25:56.907867   58528 api_server.go:166] Checking apiserver status ...
	I1123 08:25:56.907914   58528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:25:56.921218   58528 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1357/cgroup
	I1123 08:25:56.929931   58528 api_server.go:182] apiserver freezer: "3:freezer:/docker/4c6b44aee5345b517ce8a5944017fc137ef0ba09c48c7846e19aeb1486a4d293/kubepods/burstable/pod34b90b83b1797b2fc34d3985d897f9fa/6175e47591f89a466c66973230ac728f8ac49b0ed52fe0481fd264bc47b3d577"
	I1123 08:25:56.930017   58528 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4c6b44aee5345b517ce8a5944017fc137ef0ba09c48c7846e19aeb1486a4d293/kubepods/burstable/pod34b90b83b1797b2fc34d3985d897f9fa/6175e47591f89a466c66973230ac728f8ac49b0ed52fe0481fd264bc47b3d577/freezer.state
	I1123 08:25:56.937955   58528 api_server.go:204] freezer state: "THAWED"
	I1123 08:25:56.938034   58528 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:25:56.946602   58528 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:25:56.946633   58528 status.go:463] ha-990595-m03 apiserver status = Running (err=<nil>)
	I1123 08:25:56.946674   58528 status.go:176] ha-990595-m03 status: &{Name:ha-990595-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:25:56.946696   58528 status.go:174] checking status of ha-990595-m04 ...
	I1123 08:25:56.947002   58528 cli_runner.go:164] Run: docker container inspect ha-990595-m04 --format={{.State.Status}}
	I1123 08:25:56.969315   58528 status.go:371] ha-990595-m04 host status = "Running" (err=<nil>)
	I1123 08:25:56.969341   58528 host.go:66] Checking if "ha-990595-m04" exists ...
	I1123 08:25:56.969717   58528 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-990595-m04
	I1123 08:25:57.007901   58528 host.go:66] Checking if "ha-990595-m04" exists ...
	I1123 08:25:57.008270   58528 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:25:57.008318   58528 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-990595-m04
	I1123 08:25:57.036206   58528 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/ha-990595-m04/id_rsa Username:docker}
	I1123 08:25:57.146023   58528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:25:57.169794   58528 status.go:176] ha-990595-m04 status: &{Name:ha-990595-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.01s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.85s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (14.86s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 node start m02 --alsologtostderr -v 5
E1123 08:26:02.172605    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:02.178942    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:02.190302    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:02.211646    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:02.253905    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:02.335287    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:02.496935    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:02.818959    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:03.461226    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:04.743408    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:07.305231    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 node start m02 --alsologtostderr -v 5: (13.505184953s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5
E1123 08:26:12.427308    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5: (1.258281439s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.86s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.28s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.274979923s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.28s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.45s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 stop --alsologtostderr -v 5
E1123 08:26:22.668565    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:43.150412    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 stop --alsologtostderr -v 5: (37.806704511s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 start --wait true --alsologtostderr -v 5
E1123 08:27:24.112455    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 start --wait true --alsologtostderr -v 5: (1m1.478850255s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (99.45s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.43s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 node delete m03 --alsologtostderr -v 5: (10.391486917s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.43s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.8s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.80s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (36.32s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 stop --alsologtostderr -v 5: (36.184725414s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5: exit status 7 (132.653501ms)
-- stdout --
	ha-990595
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-990595-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-990595-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1123 08:28:42.073501   73275 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:28:42.073650   73275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:28:42.073660   73275 out.go:374] Setting ErrFile to fd 2...
	I1123 08:28:42.073666   73275 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:28:42.073951   73275 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:28:42.074158   73275 out.go:368] Setting JSON to false
	I1123 08:28:42.074205   73275 mustload.go:66] Loading cluster: ha-990595
	I1123 08:28:42.074282   73275 notify.go:221] Checking for updates...
	I1123 08:28:42.075289   73275 config.go:182] Loaded profile config "ha-990595": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:28:42.075316   73275 status.go:174] checking status of ha-990595 ...
	I1123 08:28:42.075927   73275 cli_runner.go:164] Run: docker container inspect ha-990595 --format={{.State.Status}}
	I1123 08:28:42.098856   73275 status.go:371] ha-990595 host status = "Stopped" (err=<nil>)
	I1123 08:28:42.098881   73275 status.go:384] host is not running, skipping remaining checks
	I1123 08:28:42.098889   73275 status.go:176] ha-990595 status: &{Name:ha-990595 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:28:42.098916   73275 status.go:174] checking status of ha-990595-m02 ...
	I1123 08:28:42.099279   73275 cli_runner.go:164] Run: docker container inspect ha-990595-m02 --format={{.State.Status}}
	I1123 08:28:42.134461   73275 status.go:371] ha-990595-m02 host status = "Stopped" (err=<nil>)
	I1123 08:28:42.134492   73275 status.go:384] host is not running, skipping remaining checks
	I1123 08:28:42.134500   73275 status.go:176] ha-990595-m02 status: &{Name:ha-990595-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:28:42.134520   73275 status.go:174] checking status of ha-990595-m04 ...
	I1123 08:28:42.134864   73275 cli_runner.go:164] Run: docker container inspect ha-990595-m04 --format={{.State.Status}}
	I1123 08:28:42.156023   73275 status.go:371] ha-990595-m04 host status = "Stopped" (err=<nil>)
	I1123 08:28:42.156048   73275 status.go:384] host is not running, skipping remaining checks
	I1123 08:28:42.156056   73275 status.go:176] ha-990595-m04 status: &{Name:ha-990595-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.32s)
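Note: the status check above exits non-zero (exit status 7) once every node reports Stopped. A minimal Go sketch of how a caller outside the test suite could run the same command and read that exit code follows; the binary path and profile name are taken from the log and are purely illustrative.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same status command the test runs above.
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-990595", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 7 is what the run above reports for a fully stopped cluster.
		fmt.Println("minikube status exit code:", exitErr.ExitCode())
		return
	}
	if err != nil {
		fmt.Println("could not run minikube:", err)
		return
	}
	fmt.Println("exit code 0: all checked components are running")
}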

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (67.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1123 08:28:46.033819    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:28:58.897814    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (1m6.166007635s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (67.21s)
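Note: the final assertion above pipes `kubectl get nodes` through a go-template that prints the status of each node's Ready condition. A small sketch of the same template evaluated with Go's text/template over a hand-built node list (an illustrative stand-in for live kubectl JSON, not captured output) shows what that command produces.

package main

import (
	"os"
	"text/template"
)

func main() {
	const tpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Tiny stand-in for `kubectl get nodes -o json`: two nodes, both Ready.
	nodes := map[string]any{
		"items": []map[string]any{
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
				{"type": "MemoryPressure", "status": "False"},
			}}},
			{"status": map[string]any{"conditions": []map[string]any{
				{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tpl))
	// Prints " True" once per node, which is the shape the test checks for.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		panic(err)
	}
}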

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddSecondaryNode (86.35s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 node add --control-plane --alsologtostderr -v 5
E1123 08:31:02.172597    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 node add --control-plane --alsologtostderr -v 5: (1m25.232939222s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-990595 status --alsologtostderr -v 5: (1.112863686s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (86.35s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.10965526s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.11s)

                                                
                                    
x
+
TestJSONOutput/start/Command (81.41s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-842214 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1123 08:31:29.875305    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-842214 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m21.405713177s)
--- PASS: TestJSONOutput/start/Command (81.41s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.74s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-842214 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.62s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-842214 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.62s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.13s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-842214 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-842214 --output=json --user=testUser: (6.133466s)
--- PASS: TestJSONOutput/stop/Command (6.13s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.23s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-943993 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-943993 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (92.284808ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"1fe0011f-6b6b-4a96-a426-b12407d0df6c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-943993] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e4e3a734-3ff9-4447-a61e-a23781158206","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21969"}}
	{"specversion":"1.0","id":"afb1eb3b-937e-485d-bf34-30a07bfb4e2b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"ea3eacbf-ebf1-4298-af32-c22df89bf4ad","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig"}}
	{"specversion":"1.0","id":"a975ddb4-f732-4b4c-906f-f800e3e68b91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube"}}
	{"specversion":"1.0","id":"d18cd36b-a394-4b4c-bd80-c35cc491cf22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"dd0b47a5-699c-4cd4-9eb4-7297157ab11f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"07d03687-d457-4f88-866f-9b4cbdaedb68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-943993" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-943993
--- PASS: TestErrorJSONOutput (0.23s)
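Note: with --output=json minikube emits one CloudEvents-style JSON object per line, as captured in the stdout block above. A minimal Go sketch that decodes the error event from that output follows; the struct is defined here only for illustration and is not minikube's own type.

package main

import (
	"encoding/json"
	"fmt"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// The error event from the stdout block above, verbatim.
	line := `{"specversion":"1.0","id":"07d03687-d457-4f88-866f-9b4cbdaedb68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}`

	var ev minikubeEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	// Prints: io.k8s.sigs.minikube.error exitcode=56 DRV_UNSUPPORTED_OS
	fmt.Println(ev.Type, "exitcode="+ev.Data["exitcode"], ev.Data["name"])
}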

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (44.29s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-998181 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-998181 --network=: (41.973461833s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-998181" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-998181
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-998181: (2.295613052s)
--- PASS: TestKicCustomNetwork/create_custom_network (44.29s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (35.87s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-099870 --network=bridge
E1123 08:33:58.897632    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-099870 --network=bridge: (33.666065801s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-099870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-099870
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-099870: (2.173105359s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (35.87s)

                                                
                                    
x
+
TestKicExistingNetwork (35.69s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1123 08:34:20.096268    4624 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 08:34:20.114170    4624 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 08:34:20.114248    4624 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 08:34:20.114265    4624 cli_runner.go:164] Run: docker network inspect existing-network
W1123 08:34:20.129632    4624 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 08:34:20.129665    4624 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1123 08:34:20.129681    4624 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1123 08:34:20.129785    4624 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 08:34:20.147008    4624 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-a5ab12b2c3b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4e:c9:6d:7b:80:76} reservation:<nil>}
I1123 08:34:20.147291    4624 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x400193b330}
I1123 08:34:20.147313    4624 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 08:34:20.147390    4624 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 08:34:20.207798    4624 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-816939 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-816939 --network=existing-network: (33.466558616s)
helpers_test.go:175: Cleaning up "existing-network-816939" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-816939
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-816939: (2.081099269s)
I1123 08:34:55.773456    4624 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.69s)
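Note: the log above shows minikube skipping 192.168.49.0/24 because it is already in use and settling on 192.168.58.0/24 before running docker network create. A minimal Go sketch of that kind of overlap check follows; the candidate list and the taken set are illustrative assumptions, not minikube's actual allocator.

package main

import (
	"fmt"
	"net"
)

func main() {
	// Subnets already present on the host (e.g. from docker network inspect).
	taken := []string{"192.168.49.0/24"}

	// Candidate private /24s to probe, in order.
	candidates := []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"}

	for _, c := range candidates {
		_, candNet, err := net.ParseCIDR(c)
		if err != nil {
			panic(err)
		}
		free := true
		for _, t := range taken {
			ip, _, err := net.ParseCIDR(t)
			if err != nil {
				panic(err)
			}
			// For same-size /24s, containing the other network's base address
			// means the ranges collide.
			if candNet.Contains(ip) {
				free = false
				break
			}
		}
		if free {
			// With the inputs above this prints 192.168.58.0/24, matching the log.
			fmt.Println("using free private subnet:", candNet.String())
			return
		}
	}
	fmt.Println("no free subnet found")
}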

                                                
                                    
x
+
TestKicCustomSubnet (36.95s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-624231 --subnet=192.168.60.0/24
E1123 08:35:21.974631    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-624231 --subnet=192.168.60.0/24: (34.666334561s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-624231 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-624231" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-624231
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-624231: (2.257909345s)
--- PASS: TestKicCustomSubnet (36.95s)

                                                
                                    
x
+
TestKicStaticIP (36.25s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-658273 --static-ip=192.168.200.200
E1123 08:36:02.174438    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-658273 --static-ip=192.168.200.200: (33.787723947s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-658273 ip
helpers_test.go:175: Cleaning up "static-ip-658273" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-658273
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-658273: (2.309153304s)
--- PASS: TestKicStaticIP (36.25s)

                                                
                                    
x
+
TestMainNoArgs (0.05s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
x
+
TestMinikubeProfile (68.49s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-292043 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-292043 --driver=docker  --container-runtime=containerd: (30.006290898s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-295029 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-295029 --driver=docker  --container-runtime=containerd: (32.415244537s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-292043
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-295029
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-295029" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-295029
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-295029: (2.129405795s)
helpers_test.go:175: Cleaning up "first-292043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-292043
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-292043: (2.459936799s)
--- PASS: TestMinikubeProfile (68.49s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (8.41s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-719164 --memory=3072 --mount-string /tmp/TestMountStartserial4192221944/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-719164 --memory=3072 --mount-string /tmp/TestMountStartserial4192221944/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.408756838s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.41s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-719164 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.28s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (8.17s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-721051 --memory=3072 --mount-string /tmp/TestMountStartserial4192221944/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-721051 --memory=3072 --mount-string /tmp/TestMountStartserial4192221944/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.170463079s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.17s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-721051 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-719164 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-719164 --alsologtostderr -v=5: (1.692154262s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-721051 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-721051
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-721051: (1.281479968s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.48s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-721051
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-721051: (6.477697509s)
--- PASS: TestMountStart/serial/RestartStopped (7.48s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-721051 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (134.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-040327 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1123 08:38:58.897287    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-040327 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m14.093732987s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.64s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (6.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-040327 -- rollout status deployment/busybox: (4.378794696s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- exec busybox-7b57f96db7-9sr5t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- exec busybox-7b57f96db7-t7hkb -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- exec busybox-7b57f96db7-9sr5t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- exec busybox-7b57f96db7-t7hkb -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- exec busybox-7b57f96db7-9sr5t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- exec busybox-7b57f96db7-t7hkb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.24s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- exec busybox-7b57f96db7-9sr5t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- exec busybox-7b57f96db7-9sr5t -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- exec busybox-7b57f96db7-t7hkb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-040327 -- exec busybox-7b57f96db7-t7hkb -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)
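Note: the shell pipeline above takes busybox's nslookup output, selects its fifth line with awk 'NR==5', and cuts out the third space-separated field, which is the resolved address of host.minikube.internal (192.168.67.1 here, confirmed with ping -c 1). A minimal Go sketch of the same extraction over a sample transcript follows; the sample text is an illustrative stand-in, not captured output.

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Illustrative stand-in for `nslookup host.minikube.internal` inside busybox.
	sample := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1 host.minikube.internal\n"

	lines := strings.Split(sample, "\n")
	fifth := lines[4] // the line awk 'NR==5' selects
	fields := strings.Fields(fifth)
	if len(fields) >= 3 {
		// Third whitespace-separated field: the host IP the test then pings.
		fmt.Println(fields[2])
	}
}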

                                                
                                    
x
+
TestMultiNode/serial/AddNode (27.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-040327 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-040327 -v=5 --alsologtostderr: (26.333563694s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.08s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-040327 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (10.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp testdata/cp-test.txt multinode-040327:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp multinode-040327:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1944158312/001/cp-test_multinode-040327.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp multinode-040327:/home/docker/cp-test.txt multinode-040327-m02:/home/docker/cp-test_multinode-040327_multinode-040327-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m02 "sudo cat /home/docker/cp-test_multinode-040327_multinode-040327-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp multinode-040327:/home/docker/cp-test.txt multinode-040327-m03:/home/docker/cp-test_multinode-040327_multinode-040327-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m03 "sudo cat /home/docker/cp-test_multinode-040327_multinode-040327-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp testdata/cp-test.txt multinode-040327-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp multinode-040327-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1944158312/001/cp-test_multinode-040327-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp multinode-040327-m02:/home/docker/cp-test.txt multinode-040327:/home/docker/cp-test_multinode-040327-m02_multinode-040327.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327 "sudo cat /home/docker/cp-test_multinode-040327-m02_multinode-040327.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp multinode-040327-m02:/home/docker/cp-test.txt multinode-040327-m03:/home/docker/cp-test_multinode-040327-m02_multinode-040327-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m03 "sudo cat /home/docker/cp-test_multinode-040327-m02_multinode-040327-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp testdata/cp-test.txt multinode-040327-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp multinode-040327-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1944158312/001/cp-test_multinode-040327-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp multinode-040327-m03:/home/docker/cp-test.txt multinode-040327:/home/docker/cp-test_multinode-040327-m03_multinode-040327.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327 "sudo cat /home/docker/cp-test_multinode-040327-m03_multinode-040327.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 cp multinode-040327-m03:/home/docker/cp-test.txt multinode-040327-m02:/home/docker/cp-test_multinode-040327-m03_multinode-040327-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 ssh -n multinode-040327-m02 "sudo cat /home/docker/cp-test_multinode-040327-m03_multinode-040327-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.90s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-040327 node stop m03: (1.306075668s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-040327 status: exit status 7 (570.613618ms)

                                                
                                                
-- stdout --
	multinode-040327
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-040327-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-040327-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-040327 status --alsologtostderr: exit status 7 (565.855677ms)

                                                
                                                
-- stdout --
	multinode-040327
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-040327-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-040327-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:40:50.136747  126289 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:40:50.136866  126289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:40:50.136877  126289 out.go:374] Setting ErrFile to fd 2...
	I1123 08:40:50.136883  126289 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:40:50.137121  126289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:40:50.137305  126289 out.go:368] Setting JSON to false
	I1123 08:40:50.137346  126289 mustload.go:66] Loading cluster: multinode-040327
	I1123 08:40:50.137423  126289 notify.go:221] Checking for updates...
	I1123 08:40:50.138619  126289 config.go:182] Loaded profile config "multinode-040327": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:40:50.138735  126289 status.go:174] checking status of multinode-040327 ...
	I1123 08:40:50.139450  126289 cli_runner.go:164] Run: docker container inspect multinode-040327 --format={{.State.Status}}
	I1123 08:40:50.160164  126289 status.go:371] multinode-040327 host status = "Running" (err=<nil>)
	I1123 08:40:50.160194  126289 host.go:66] Checking if "multinode-040327" exists ...
	I1123 08:40:50.160502  126289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-040327
	I1123 08:40:50.191393  126289 host.go:66] Checking if "multinode-040327" exists ...
	I1123 08:40:50.191859  126289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:40:50.191909  126289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-040327
	I1123 08:40:50.209727  126289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/multinode-040327/id_rsa Username:docker}
	I1123 08:40:50.313251  126289 ssh_runner.go:195] Run: systemctl --version
	I1123 08:40:50.320861  126289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:40:50.336904  126289 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:40:50.402424  126289 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-23 08:40:50.393138807 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:40:50.402974  126289 kubeconfig.go:125] found "multinode-040327" server: "https://192.168.67.2:8443"
	I1123 08:40:50.403011  126289 api_server.go:166] Checking apiserver status ...
	I1123 08:40:50.403055  126289 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:40:50.416035  126289 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1355/cgroup
	I1123 08:40:50.424504  126289 api_server.go:182] apiserver freezer: "3:freezer:/docker/cad59ffa7d93492639ecf723aa67f6600bff564d1e95350c8a8088febc3ecd79/kubepods/burstable/pod127fa7e293e142e92078746536ac0e0a/817978a164f4d5215e4a35f594a82ac9e5becd28b120c184a8cd297ec0ea0f51"
	I1123 08:40:50.424583  126289 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/cad59ffa7d93492639ecf723aa67f6600bff564d1e95350c8a8088febc3ecd79/kubepods/burstable/pod127fa7e293e142e92078746536ac0e0a/817978a164f4d5215e4a35f594a82ac9e5becd28b120c184a8cd297ec0ea0f51/freezer.state
	I1123 08:40:50.432875  126289 api_server.go:204] freezer state: "THAWED"
	I1123 08:40:50.432900  126289 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 08:40:50.441109  126289 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 08:40:50.441137  126289 status.go:463] multinode-040327 apiserver status = Running (err=<nil>)
	I1123 08:40:50.441148  126289 status.go:176] multinode-040327 status: &{Name:multinode-040327 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:40:50.441163  126289 status.go:174] checking status of multinode-040327-m02 ...
	I1123 08:40:50.441468  126289 cli_runner.go:164] Run: docker container inspect multinode-040327-m02 --format={{.State.Status}}
	I1123 08:40:50.459576  126289 status.go:371] multinode-040327-m02 host status = "Running" (err=<nil>)
	I1123 08:40:50.459597  126289 host.go:66] Checking if "multinode-040327-m02" exists ...
	I1123 08:40:50.459905  126289 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-040327-m02
	I1123 08:40:50.482764  126289 host.go:66] Checking if "multinode-040327-m02" exists ...
	I1123 08:40:50.483083  126289 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:40:50.483119  126289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-040327-m02
	I1123 08:40:50.499527  126289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32914 SSHKeyPath:/home/jenkins/minikube-integration/21969-2811/.minikube/machines/multinode-040327-m02/id_rsa Username:docker}
	I1123 08:40:50.608903  126289 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:40:50.622292  126289 status.go:176] multinode-040327-m02 status: &{Name:multinode-040327-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:40:50.622324  126289 status.go:174] checking status of multinode-040327-m03 ...
	I1123 08:40:50.622646  126289 cli_runner.go:164] Run: docker container inspect multinode-040327-m03 --format={{.State.Status}}
	I1123 08:40:50.640981  126289 status.go:371] multinode-040327-m03 host status = "Stopped" (err=<nil>)
	I1123 08:40:50.641001  126289 status.go:384] host is not running, skipping remaining checks
	I1123 08:40:50.641007  126289 status.go:176] multinode-040327-m03 status: &{Name:multinode-040327-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.44s)
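Note: the stderr above shows the status command's apiserver check: find the kube-apiserver process, confirm its cgroup freezer state is THAWED, then probe https://192.168.67.2:8443/healthz and expect a 200 "ok". A minimal Go sketch of just the healthz probe follows; the address is taken from the log, and certificate verification is skipped in this illustrative probe since the apiserver cert is signed by the cluster's own CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skip verification for this sketch; the cluster CA is not in the
			// host trust store.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as in the log above.
	fmt.Println(resp.StatusCode, string(body))
}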

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.93s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-040327 node start m03 -v=5 --alsologtostderr: (7.112933982s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.93s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (77.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-040327
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-040327
E1123 08:41:02.173178    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-040327: (25.491723119s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-040327 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-040327 --wait=true -v=5 --alsologtostderr: (52.074531693s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-040327
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.69s)
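
The restart scenario above stops the whole profile and starts it again with --wait=true, then checks that all nodes are still listed. A sketch of the same sequence (profile name illustrative):

	out/minikube-linux-arm64 node list -p multinode-demo
	out/minikube-linux-arm64 stop -p multinode-demo
	out/minikube-linux-arm64 start -p multinode-demo --wait=true -v=5 --alsologtostderr
	out/minikube-linux-arm64 node list -p multinode-demo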

                                                
                                    
TestMultiNode/serial/DeleteNode (5.68s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-040327 node delete m03: (4.977698559s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.68s)
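
The delete step removes the m03 worker and then checks readiness of the remaining nodes with a go-template query; a sketch with cleaner shell quoting than the harness output above (profile name illustrative):

	out/minikube-linux-arm64 -p multinode-demo node delete m03
	out/minikube-linux-arm64 -p multinode-demo status --alsologtostderr
	kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'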

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.09s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 stop
E1123 08:42:25.237430    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-040327 stop: (23.892170035s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-040327 status: exit status 7 (96.482239ms)

                                                
                                                
-- stdout --
	multinode-040327
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-040327-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-040327 status --alsologtostderr: exit status 7 (98.278526ms)

                                                
                                                
-- stdout --
	multinode-040327
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-040327-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:42:45.988324  134995 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:42:45.988443  134995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:42:45.988455  134995 out.go:374] Setting ErrFile to fd 2...
	I1123 08:42:45.988460  134995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:42:45.988715  134995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:42:45.988883  134995 out.go:368] Setting JSON to false
	I1123 08:42:45.988923  134995 mustload.go:66] Loading cluster: multinode-040327
	I1123 08:42:45.989000  134995 notify.go:221] Checking for updates...
	I1123 08:42:45.989885  134995 config.go:182] Loaded profile config "multinode-040327": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:42:45.989909  134995 status.go:174] checking status of multinode-040327 ...
	I1123 08:42:45.990548  134995 cli_runner.go:164] Run: docker container inspect multinode-040327 --format={{.State.Status}}
	I1123 08:42:46.010495  134995 status.go:371] multinode-040327 host status = "Stopped" (err=<nil>)
	I1123 08:42:46.010523  134995 status.go:384] host is not running, skipping remaining checks
	I1123 08:42:46.010531  134995 status.go:176] multinode-040327 status: &{Name:multinode-040327 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:42:46.010566  134995 status.go:174] checking status of multinode-040327-m02 ...
	I1123 08:42:46.010887  134995 cli_runner.go:164] Run: docker container inspect multinode-040327-m02 --format={{.State.Status}}
	I1123 08:42:46.038085  134995 status.go:371] multinode-040327-m02 host status = "Stopped" (err=<nil>)
	I1123 08:42:46.038110  134995 status.go:384] host is not running, skipping remaining checks
	I1123 08:42:46.038126  134995 status.go:176] multinode-040327-m02 status: &{Name:multinode-040327-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.09s)
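
As the status output above shows, "minikube status" exits with code 7 once the hosts are stopped, so a scripted check has to tolerate that exit code; a sketch (profile name illustrative):

	out/minikube-linux-arm64 -p multinode-demo stop
	out/minikube-linux-arm64 -p multinode-demo status || echo "status exit code: $? (7 means the hosts are stopped, expected here)"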

                                                
                                    
TestMultiNode/serial/RestartMultiNode (49.71s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-040327 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-040327 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (49.018346194s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-040327 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (49.71s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (37.27s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-040327
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-040327-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-040327-m02 --driver=docker  --container-runtime=containerd: exit status 14 (88.063348ms)

                                                
                                                
-- stdout --
	* [multinode-040327-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-040327-m02' is duplicated with machine name 'multinode-040327-m02' in profile 'multinode-040327'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-040327-m03 --driver=docker  --container-runtime=containerd
E1123 08:43:58.897646    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-040327-m03 --driver=docker  --container-runtime=containerd: (34.644159971s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-040327
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-040327: exit status 80 (342.05263ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-040327 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-040327-m03 already exists in multinode-040327-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-040327-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-040327-m03: (2.143339353s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (37.27s)
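
The conflict checks above show two distinct failure modes: starting a new profile whose name collides with an existing machine name exits with status 14 (MK_USAGE), and "node add" refuses to create a node whose generated name already belongs to another profile (exit 80, GUEST_NODE_ADD). A sketch of the colliding start (names illustrative):

	# fails with exit status 14 if "existing-m02" is already a machine name inside profile "existing"
	out/minikube-linux-arm64 start -p existing-m02 --driver=docker --container-runtime=containerd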

                                                
                                    
TestPreload (126.49s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-858829 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-858829 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (55.492012882s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-858829 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-858829 image pull gcr.io/k8s-minikube/busybox: (2.164287641s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-858829
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-858829: (5.922946804s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-858829 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
E1123 08:46:02.172787    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-858829 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (1m0.230559619s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-858829 image list
helpers_test.go:175: Cleaning up "test-preload-858829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-858829
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-858829: (2.448320256s)
--- PASS: TestPreload (126.49s)
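
The preload scenario starts an older Kubernetes version with --preload=false, pulls an extra image, stops, restarts on the current default, and lists the cached images afterwards; a sketch (profile name illustrative):

	out/minikube-linux-arm64 start -p preload-demo --memory=3072 --preload=false --kubernetes-version=v1.32.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p preload-demo image pull gcr.io/k8s-minikube/busybox
	out/minikube-linux-arm64 stop -p preload-demo
	out/minikube-linux-arm64 start -p preload-demo --memory=3072 --wait=true --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 -p preload-demo image list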

                                                
                                    
TestScheduledStopUnix (110.42s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-981115 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-981115 --memory=3072 --driver=docker  --container-runtime=containerd: (33.544719761s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-981115 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:46:57.471824  150897 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:46:57.472140  150897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:57.472204  150897 out.go:374] Setting ErrFile to fd 2...
	I1123 08:46:57.472227  150897 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:57.472685  150897 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:46:57.473105  150897 out.go:368] Setting JSON to false
	I1123 08:46:57.473338  150897 mustload.go:66] Loading cluster: scheduled-stop-981115
	I1123 08:46:57.474245  150897 config.go:182] Loaded profile config "scheduled-stop-981115": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:57.474382  150897 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/config.json ...
	I1123 08:46:57.474607  150897 mustload.go:66] Loading cluster: scheduled-stop-981115
	I1123 08:46:57.474788  150897 config.go:182] Loaded profile config "scheduled-stop-981115": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-981115 -n scheduled-stop-981115
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-981115 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:46:57.949782  150986 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:46:57.949959  150986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:57.949974  150986 out.go:374] Setting ErrFile to fd 2...
	I1123 08:46:57.949980  150986 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:46:57.950247  150986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:46:57.950501  150986 out.go:368] Setting JSON to false
	I1123 08:46:57.950732  150986 daemonize_unix.go:73] killing process 150913 as it is an old scheduled stop
	I1123 08:46:57.950824  150986 mustload.go:66] Loading cluster: scheduled-stop-981115
	I1123 08:46:57.951384  150986 config.go:182] Loaded profile config "scheduled-stop-981115": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:46:57.951466  150986 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/config.json ...
	I1123 08:46:57.951649  150986 mustload.go:66] Loading cluster: scheduled-stop-981115
	I1123 08:46:57.951769  150986 config.go:182] Loaded profile config "scheduled-stop-981115": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 08:46:57.959193    4624 retry.go:31] will retry after 85.455µs: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.960239    4624 retry.go:31] will retry after 101.079µs: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.962975    4624 retry.go:31] will retry after 298.996µs: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.964221    4624 retry.go:31] will retry after 392.86µs: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.965370    4624 retry.go:31] will retry after 464.541µs: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.966555    4624 retry.go:31] will retry after 1.111323ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.968722    4624 retry.go:31] will retry after 1.307806ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.972698    4624 retry.go:31] will retry after 1.274303ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.975829    4624 retry.go:31] will retry after 2.094512ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.979097    4624 retry.go:31] will retry after 3.075808ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.982612    4624 retry.go:31] will retry after 6.651762ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.989836    4624 retry.go:31] will retry after 6.39819ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:57.997111    4624 retry.go:31] will retry after 14.655924ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:58.012372    4624 retry.go:31] will retry after 12.350395ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:58.025579    4624 retry.go:31] will retry after 38.468547ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
I1123 08:46:58.064765    4624 retry.go:31] will retry after 30.156019ms: open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-981115 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-981115 -n scheduled-stop-981115
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-981115
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-981115 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1123 08:47:23.910829  151665 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:47:23.910953  151665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:47:23.910964  151665 out.go:374] Setting ErrFile to fd 2...
	I1123 08:47:23.910970  151665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:47:23.911231  151665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:47:23.911519  151665 out.go:368] Setting JSON to false
	I1123 08:47:23.911618  151665 mustload.go:66] Loading cluster: scheduled-stop-981115
	I1123 08:47:23.911991  151665 config.go:182] Loaded profile config "scheduled-stop-981115": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:47:23.912066  151665 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/scheduled-stop-981115/config.json ...
	I1123 08:47:23.912257  151665 mustload.go:66] Loading cluster: scheduled-stop-981115
	I1123 08:47:23.912374  151665 config.go:182] Loaded profile config "scheduled-stop-981115": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-981115
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-981115: exit status 7 (71.120182ms)

                                                
                                                
-- stdout --
	scheduled-stop-981115
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-981115 -n scheduled-stop-981115
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-981115 -n scheduled-stop-981115: exit status 7 (73.522998ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-981115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-981115
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-981115: (5.231361697s)
--- PASS: TestScheduledStopUnix (110.42s)
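
The scheduled-stop flow above schedules a stop for later, replaces it with a shorter schedule (killing the old scheduler process), cancels it, and finally lets a 15s schedule fire; "status --format={{.TimeToStop}}" reports the pending schedule, and plain "status" exits 7 once the stop has happened. A sketch (profile name illustrative):

	out/minikube-linux-arm64 stop -p demo --schedule 5m
	out/minikube-linux-arm64 status --format='{{.TimeToStop}}' -p demo
	out/minikube-linux-arm64 stop -p demo --cancel-scheduled
	out/minikube-linux-arm64 stop -p demo --schedule 15s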

                                                
                                    
TestInsufficientStorage (12.71s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-851835 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-851835 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.133920969s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"54531482-9aa1-44b1-bbf7-68091a695bbb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-851835] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e713024e-228e-427f-81a9-1043ece5c730","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21969"}}
	{"specversion":"1.0","id":"2754f499-50d0-48bb-9d33-93aad67f0fea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4a9d5f7b-9325-46e8-beb7-802b041515d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig"}}
	{"specversion":"1.0","id":"f38df58a-4a34-4ad6-8c1b-e43a45dc27f8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube"}}
	{"specversion":"1.0","id":"e30cbb71-c00e-4693-ae5b-c7a3bfae0a1d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"4ee1a8f6-0d4f-42e0-95d8-db84b537369a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"80f4654d-99fd-4495-9f4b-d391d66068cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"eca6a4e4-c586-4463-8e79-a43cbb820d4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"333b15be-b56a-45ef-8688-75125d930408","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"03102a6c-fa06-4f59-b815-11e61ae2faac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"abdbd2bc-49be-4156-b21e-b59fd289f5ed","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-851835\" primary control-plane node in \"insufficient-storage-851835\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"cb89303a-e77f-4768-aa0c-dd1d50a54b32","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"2829b2ea-fad7-46eb-9ec0-d75108884123","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"109453d0-11d5-499d-aa77-5f708f12f40d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-851835 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-851835 --output=json --layout=cluster: exit status 7 (309.484097ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-851835","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-851835","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 08:48:24.722039  153495 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-851835" does not appear in /home/jenkins/minikube-integration/21969-2811/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-851835 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-851835 --output=json --layout=cluster: exit status 7 (297.170797ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-851835","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-851835","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1123 08:48:25.017959  153562 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-851835" does not appear in /home/jenkins/minikube-integration/21969-2811/kubeconfig
	E1123 08:48:25.028692  153562 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/insufficient-storage-851835/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-851835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-851835
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-851835: (1.965301617s)
--- PASS: TestInsufficientStorage (12.71s)
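
The storage check relies on test-only environment overrides (MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE, visible in the JSON events above) so that the start fails the disk-space check with exit 26 (RSRC_DOCKER_STORAGE) and "status --layout=cluster" reports code 507; a sketch under those assumptions (profile name illustrative):

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-arm64 start -p storage-demo --memory=3072 --output=json --wait=true --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 status -p storage-demo --output=json --layout=cluster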

                                                
                                    
TestRunningBinaryUpgrade (63.94s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2724714999 start -p running-upgrade-276566 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1123 08:52:01.975932    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2724714999 start -p running-upgrade-276566 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (35.759110678s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-276566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-276566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (25.144853365s)
helpers_test.go:175: Cleaning up "running-upgrade-276566" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-276566
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-276566: (2.125090152s)
--- PASS: TestRunningBinaryUpgrade (63.94s)

                                                
                                    
TestKubernetesUpgrade (354.32s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-291582 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-291582 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.301404406s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-291582
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-291582: (1.322958083s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-291582 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-291582 status --format={{.Host}}: exit status 7 (68.634769ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-291582 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-291582 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m54.308743374s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-291582 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-291582 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-291582 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (129.689553ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-291582] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-291582
	    minikube start -p kubernetes-upgrade-291582 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2915822 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-291582 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-291582 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-291582 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (18.703222136s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-291582" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-291582
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-291582: (2.355708255s)
--- PASS: TestKubernetesUpgrade (354.32s)
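
The upgrade path above is: start on an old Kubernetes version, stop, start again on a newer version, then confirm that an in-place downgrade of the same profile is rejected with exit 106 (K8S_DOWNGRADE_UNSUPPORTED); a sketch (profile name illustrative):

	out/minikube-linux-arm64 start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 stop -p upgrade-demo
	out/minikube-linux-arm64 start -p upgrade-demo --memory=3072 --kubernetes-version=v1.34.1 --driver=docker --container-runtime=containerd
	# downgrading the existing cluster is refused (exit status 106)
	out/minikube-linux-arm64 start -p upgrade-demo --memory=3072 --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd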

                                                
                                    
TestMissingContainerUpgrade (142.32s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.96312810 start -p missing-upgrade-624973 --memory=3072 --driver=docker  --container-runtime=containerd
E1123 08:48:58.897626    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.96312810 start -p missing-upgrade-624973 --memory=3072 --driver=docker  --container-runtime=containerd: (58.934243257s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-624973
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-624973
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-624973 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-624973 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m17.371348836s)
helpers_test.go:175: Cleaning up "missing-upgrade-624973" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-624973
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-624973: (4.380710543s)
--- PASS: TestMissingContainerUpgrade (142.32s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-999853 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-999853 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (100.183745ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-999853] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
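
As the error above shows, --no-kubernetes and --kubernetes-version are mutually exclusive (exit status 14), and any globally configured kubernetes-version can be unset before retrying; a sketch (profile name illustrative):

	# rejected: conflicting flags (exit status 14)
	out/minikube-linux-arm64 start -p nok8s-demo --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker --container-runtime=containerd
	# clear any global default, then start without Kubernetes
	out/minikube-linux-arm64 config unset kubernetes-version
	out/minikube-linux-arm64 start -p nok8s-demo --no-kubernetes --driver=docker --container-runtime=containerd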

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.98s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-999853 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-999853 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.53852722s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-999853 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.98s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (24.55s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-999853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-999853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.266144987s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-999853 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-999853 status -o json: exit status 2 (305.211322ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-999853","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-999853
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-999853: (1.977932006s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.55s)

                                                
                                    
TestNoKubernetes/serial/Start (7.22s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-999853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-999853 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (7.220063334s)
--- PASS: TestNoKubernetes/serial/Start (7.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21969-2811/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-999853 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-999853 "sudo systemctl is-active --quiet service kubelet": exit status 1 (280.011352ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.7s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.70s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.3s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-999853
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-999853: (1.294901133s)
--- PASS: TestNoKubernetes/serial/Stop (1.30s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.83s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-999853 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-999853 --driver=docker  --container-runtime=containerd: (6.832508594s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.83s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-999853 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-999853 "sudo systemctl is-active --quiet service kubelet": exit status 1 (376.030986ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.38s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.82s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.82s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (52.99s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.745734913 start -p stopped-upgrade-485625 --memory=3072 --vm-driver=docker  --container-runtime=containerd
E1123 08:51:02.172716    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.745734913 start -p stopped-upgrade-485625 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (34.75624057s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.745734913 -p stopped-upgrade-485625 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.745734913 -p stopped-upgrade-485625 stop: (1.276672459s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-485625 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-485625 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (16.95486008s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (52.99s)
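
The stopped-binary upgrade creates a cluster with an older released minikube, stops it, and then starts the same profile with the freshly built binary; a sketch, assuming OLD_MINIKUBE points at an older release binary such as the temp copies used above (variable and profile name illustrative):

	# OLD_MINIKUBE=/path/to/older/minikube (illustrative)
	$OLD_MINIKUBE start -p upgrade-demo --memory=3072 --vm-driver=docker --container-runtime=containerd
	$OLD_MINIKUBE -p upgrade-demo stop
	out/minikube-linux-arm64 start -p upgrade-demo --memory=3072 --driver=docker --container-runtime=containerd
	out/minikube-linux-arm64 logs -p upgrade-demo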

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-485625
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-485625: (1.408887333s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.41s)

                                                
                                    
TestPause/serial/Start (83.36s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-534426 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
E1123 08:53:58.897461    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-534426 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (1m23.362606664s)
--- PASS: TestPause/serial/Start (83.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (7.05s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-534426 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-534426 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (7.038848305s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.05s)

                                                
                                    
TestPause/serial/Pause (0.74s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-534426 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.74s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-534426 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-534426 --output=json --layout=cluster: exit status 2 (336.509569ms)

                                                
                                                
-- stdout --
	{"Name":"pause-534426","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-534426","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)

                                                
                                    
TestPause/serial/Unpause (0.77s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-534426 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.77s)

                                                
                                    
TestPause/serial/PauseAgain (1.06s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-534426 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-534426 --alsologtostderr -v=5: (1.060834994s)
--- PASS: TestPause/serial/PauseAgain (1.06s)

                                                
                                    
TestPause/serial/DeletePaused (2.95s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-534426 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-534426 --alsologtostderr -v=5: (2.949238077s)
--- PASS: TestPause/serial/DeletePaused (2.95s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.41s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-534426
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-534426: exit status 1 (17.757094ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-534426: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.41s)
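The checks above can be reproduced by hand after deleting a profile; the non-zero docker volume inspect exit is the expected signal that the volume is gone. A minimal sketch (standard docker CLI flags; the profile name is the one used in this run):

# None of these should report a leftover resource after `delete -p pause-534426`:
docker ps -a --filter name=pause-534426
docker volume inspect pause-534426 || echo "volume removed (expected)"
docker network ls --filter name=pause-534426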

                                                
                                    
TestNetworkPlugins/group/false (5.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-694698 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-694698 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (278.112901ms)

                                                
                                                
-- stdout --
	* [false-694698] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 08:55:22.624092  194778 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:55:22.624305  194778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:55:22.624327  194778 out.go:374] Setting ErrFile to fd 2...
	I1123 08:55:22.624350  194778 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:55:22.624631  194778 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-2811/.minikube/bin
	I1123 08:55:22.625055  194778 out.go:368] Setting JSON to false
	I1123 08:55:22.625982  194778 start.go:133] hostinfo: {"hostname":"ip-172-31-24-2","uptime":5875,"bootTime":1763882248,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"6d436adf-771e-4269-b9a3-c25fd4fca4f5"}
	I1123 08:55:22.626076  194778 start.go:143] virtualization:  
	I1123 08:55:22.629758  194778 out.go:179] * [false-694698] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1123 08:55:22.633036  194778 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:55:22.633124  194778 notify.go:221] Checking for updates...
	I1123 08:55:22.639974  194778 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:55:22.642974  194778 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-2811/kubeconfig
	I1123 08:55:22.646665  194778 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-2811/.minikube
	I1123 08:55:22.649584  194778 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1123 08:55:22.652419  194778 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:55:22.655908  194778 config.go:182] Loaded profile config "kubernetes-upgrade-291582": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1123 08:55:22.656018  194778 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:55:22.707715  194778 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1123 08:55:22.707836  194778 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:55:22.813251  194778 info.go:266] docker info: {ID:J4M5:W6MX:GOX4:4LAQ:VI7E:VJNF:J3OP:OPBH:GF7G:PPY4:WQWD:7N4L Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-23 08:55:22.80275644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-24-2 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path
:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1123 08:55:22.813353  194778 docker.go:319] overlay module found
	I1123 08:55:22.817173  194778 out.go:179] * Using the docker driver based on user configuration
	I1123 08:55:22.820003  194778 start.go:309] selected driver: docker
	I1123 08:55:22.820048  194778 start.go:927] validating driver "docker" against <nil>
	I1123 08:55:22.820063  194778 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:55:22.823393  194778 out.go:203] 
	W1123 08:55:22.826408  194778 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1123 08:55:22.829315  194778 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-694698 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-694698" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:55:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-291582
contexts:
- context:
    cluster: kubernetes-upgrade-291582
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:55:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-291582
  name: kubernetes-upgrade-291582
current-context: kubernetes-upgrade-291582
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-291582
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/kubernetes-upgrade-291582/client.crt
    client-key: /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/kubernetes-upgrade-291582/client.key
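Note: the kubeconfig dumped above only contains the kubernetes-upgrade-291582 entries, which is why every probe in this debug log reports that the false-694698 context does not exist; the false-694698 profile was rejected before a context could be written. A minimal sketch of targeting the context that does exist (standard kubectl usage; the context name is taken from the dump above):

# Probe the context that is actually present in this kubeconfig:
kubectl --context kubernetes-upgrade-291582 get nodes
# Or make it the current context for subsequent commands:
kubectl config use-context kubernetes-upgrade-291582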

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-694698

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694698"

                                                
                                                
----------------------- debugLogs end: false-694698 [took: 4.858136111s] --------------------------------
helpers_test.go:175: Cleaning up "false-694698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-694698
--- PASS: TestNetworkPlugins/group/false (5.38s)
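The exit status 14 above is the expected outcome for this group: with the containerd runtime, minikube rejects --cni=false at validation time ("The \"containerd\" container runtime requires CNI"), so no cluster is created. A minimal sketch of a start invocation that passes that validation (bridge is one of the CNI values minikube accepts; any supported CNI would do):

# containerd needs a CNI, so name one explicitly instead of --cni=false:
out/minikube-linux-arm64 start -p false-694698 --memory=3072 \
  --cni=bridge --driver=docker --container-runtime=containerd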

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (60.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m0.07194766s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (60.07s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-132097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-132097 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.086269698s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-132097 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-132097 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-132097 --alsologtostderr -v=3: (12.19116777s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-132097 -n old-k8s-version-132097
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-132097 -n old-k8s-version-132097: exit status 7 (74.365015ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-132097 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (48.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1123 08:58:58.897393    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:59:05.238812    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/functional-177240/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-132097 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (47.767405253s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-132097 -n old-k8s-version-132097
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (48.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-b8msx" [7bef0e4d-2989-4872-ac8c-fd16ae88b26b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00393106s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)
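The helper above polls for a Running pod matching the k8s-app=kubernetes-dashboard label; a comparable readiness check can be expressed directly with kubectl wait. A minimal sketch (standard kubectl flags; the label, namespace, and context are the ones shown above):

# Block until the dashboard pod behind that label selector reports Ready, or time out:
kubectl --context old-k8s-version-132097 -n kubernetes-dashboard \
  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=120s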

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-b8msx" [7bef0e4d-2989-4872-ac8c-fd16ae88b26b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00316048s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-132097 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-132097 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (3.11s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-132097 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-132097 -n old-k8s-version-132097
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-132097 -n old-k8s-version-132097: exit status 2 (362.202107ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-132097 -n old-k8s-version-132097
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-132097 -n old-k8s-version-132097: exit status 2 (350.483281ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-132097 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-132097 -n old-k8s-version-132097
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-132097 -n old-k8s-version-132097
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.68s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-118762 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-118762 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m27.684487597s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (87.68s)
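This profile pins the API server to port 8444 (--apiserver-port=8444) instead of minikube's default 8443. A minimal sketch of confirming the non-default port once the cluster is up (standard kubectl command; exact output wording varies by version):

# The control-plane URL printed here should end in :8444 for this profile:
kubectl --context default-k8s-diff-port-118762 cluster-info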

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (89.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-672503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-672503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m29.768533626s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (89.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-118762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-118762 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.013890567s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-118762 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.12s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (12.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-118762 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-118762 --alsologtostderr -v=3: (12.16444397s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.16s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-672503 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-672503 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.05s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.17s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-672503 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-672503 --alsologtostderr -v=3: (12.167437693s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762: exit status 7 (65.690573ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-118762 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.61s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-118762 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-118762 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (55.190862635s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (55.61s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-672503 -n embed-certs-672503
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-672503 -n embed-certs-672503: exit status 7 (144.816152ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-672503 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (55.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-672503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-672503 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (54.730529796s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-672503 -n embed-certs-672503
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (55.11s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l75wx" [87e0c878-4f2f-4569-9973-5efe1c61835a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004069133s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l75wx" [87e0c878-4f2f-4569-9973-5efe1c61835a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003437043s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-118762 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7qztt" [fc90b914-6e6f-4064-aff8-da625c5dd46e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003098203s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-118762 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-118762 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762: exit status 2 (349.678837ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762: exit status 2 (379.797239ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-118762 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-118762 -n default-k8s-diff-port-118762
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.26s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7qztt" [fc90b914-6e6f-4064-aff8-da625c5dd46e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002988461s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-672503 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (76.27s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-052851 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-052851 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m16.269993383s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.27s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.57s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-672503 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.57s)

TestStartStop/group/embed-certs/serial/Pause (3.35s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-672503 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-672503 -n embed-certs-672503
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-672503 -n embed-certs-672503: exit status 2 (505.571204ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-672503 -n embed-certs-672503
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-672503 -n embed-certs-672503: exit status 2 (422.698508ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-672503 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-672503 -n embed-certs-672503
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-672503 -n embed-certs-672503
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.35s)

TestStartStop/group/newest-cni/serial/FirstStart (47.44s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-948460 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1123 09:02:58.270139    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:02:58.276485    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:02:58.287839    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:02:58.309203    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:02:58.350528    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:02:58.432187    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:02:58.593680    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:02:58.915522    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:02:59.557518    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:03:00.839404    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:03:03.400697    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:03:08.522380    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:03:18.764528    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:03:39.246527    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-948460 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (47.439553924s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.44s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.6s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-948460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-948460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.595799151s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.60s)

TestStartStop/group/newest-cni/serial/Stop (1.61s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-948460 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-948460 --alsologtostderr -v=3: (1.61145247s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.61s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948460 -n newest-cni-948460
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948460 -n newest-cni-948460: exit status 7 (121.478939ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-948460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.29s)

TestStartStop/group/newest-cni/serial/SecondStart (17.43s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-948460 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1123 09:03:58.897608    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-948460 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (16.977749774s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-948460 -n newest-cni-948460
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.43s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-948460 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/newest-cni/serial/Pause (3.01s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-948460 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948460 -n newest-cni-948460
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948460 -n newest-cni-948460: exit status 2 (390.827585ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-948460 -n newest-cni-948460
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-948460 -n newest-cni-948460: exit status 2 (336.627985ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-948460 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-948460 -n newest-cni-948460
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-948460 -n newest-cni-948460
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.01s)

TestNetworkPlugins/group/auto/Start (88.56s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m28.562678038s)
--- PASS: TestNetworkPlugins/group/auto/Start (88.56s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.41s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-052851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-052851 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.314411605s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-052851 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.41s)

TestStartStop/group/no-preload/serial/Stop (12.16s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-052851 --alsologtostderr -v=3
E1123 09:04:20.208555    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-052851 --alsologtostderr -v=3: (12.163355633s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-052851 -n no-preload-052851
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-052851 -n no-preload-052851: exit status 7 (92.888714ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-052851 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/no-preload/serial/SecondStart (57.35s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-052851 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-052851 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (56.962761128s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-052851 -n no-preload-052851
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (57.35s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wn2jx" [c94d679b-b72b-42fb-a573-26648a144278] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004287967s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wn2jx" [c94d679b-b72b-42fb-a573-26648a144278] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004856505s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-052851 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.14s)

TestNetworkPlugins/group/auto/KubeletFlags (0.33s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-694698 "pgrep -a kubelet"
I1123 09:05:36.515747    4624 config.go:182] Loaded profile config "auto-694698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

TestNetworkPlugins/group/auto/NetCatPod (11.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-694698 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dxn9b" [a0037bb3-162b-4eab-8b33-de555ad03016] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-dxn9b" [a0037bb3-162b-4eab-8b33-de555ad03016] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003561654s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.32s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-052851 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.33s)

TestStartStop/group/no-preload/serial/Pause (3.56s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-052851 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p no-preload-052851 --alsologtostderr -v=1: (1.026958947s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-052851 -n no-preload-052851
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-052851 -n no-preload-052851: exit status 2 (442.602561ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-052851 -n no-preload-052851
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-052851 -n no-preload-052851: exit status 2 (410.89881ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-052851 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-052851 -n no-preload-052851
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-052851 -n no-preload-052851
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.56s)
E1123 09:11:17.778368    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestNetworkPlugins/group/kindnet/Start (88.4s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m28.398295145s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (88.40s)

TestNetworkPlugins/group/auto/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-694698 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.18s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

TestNetworkPlugins/group/calico/Start (64.33s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1123 09:06:21.207204    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:06:41.688529    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m4.332835572s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.33s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-8dwcs" [dd506ebe-f789-44e6-9c83-28b8f855c193] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004788896s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-2bfbh" [1f2c9c3a-36c8-47c0-9e70-07081a78589b] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-2bfbh" [1f2c9c3a-36c8-47c0-9e70-07081a78589b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.004952391s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-694698 "pgrep -a kubelet"
I1123 09:07:20.415677    4624 config.go:182] Loaded profile config "kindnet-694698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-694698 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tjh8j" [90f09c17-088a-4731-bc86-63bc64799b87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1123 09:07:22.650551    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-tjh8j" [90f09c17-088a-4731-bc86-63bc64799b87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003413706s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-694698 "pgrep -a kubelet"
I1123 09:07:24.404727    4624 config.go:182] Loaded profile config "calico-694698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (9.3s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-694698 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-lwlsg" [6a03d39b-215c-4187-84b7-0572437277a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-lwlsg" [6a03d39b-215c-4187-84b7-0572437277a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004015557s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.30s)

TestNetworkPlugins/group/kindnet/DNS (0.21s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-694698 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

TestNetworkPlugins/group/kindnet/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.16s)

TestNetworkPlugins/group/kindnet/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.17s)

TestNetworkPlugins/group/calico/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-694698 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

TestNetworkPlugins/group/calico/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.17s)

TestNetworkPlugins/group/custom-flannel/Start (61.25s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
E1123 09:07:58.269997    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m1.25063764s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.25s)

TestNetworkPlugins/group/enable-default-cni/Start (82.51s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1123 09:08:25.972971    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/old-k8s-version-132097/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:08:41.977746    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:08:44.572229    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/default-k8s-diff-port-118762/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (1m22.506910918s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (82.51s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-694698 "pgrep -a kubelet"
E1123 09:08:58.897876    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/addons-698781/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1123 09:08:58.946831    4624 config.go:182] Loaded profile config "custom-flannel-694698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-694698 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-rhns5" [a3a131a7-04b1-422f-863a-9afaf9397dd4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1123 09:09:00.288048    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:00.297177    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:00.335521    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:00.357750    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:00.399369    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:00.492753    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:00.654747    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:00.976996    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:01.619113    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:02.901501    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rhns5" [a3a131a7-04b1-422f-863a-9afaf9397dd4] Running
E1123 09:09:05.463460    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003329914s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.29s)

TestNetworkPlugins/group/custom-flannel/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-694698 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.16s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-694698 "pgrep -a kubelet"
I1123 09:09:25.910210    4624 config.go:182] Loaded profile config "enable-default-cni-694698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.45s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-694698 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mgwgb" [a5c66491-e8ff-4f59-ae68-c5a517ec6c31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mgwgb" [a5c66491-e8ff-4f59-ae68-c5a517ec6c31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004916754s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.34s)

TestNetworkPlugins/group/flannel/Start (65.99s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m5.98853482s)
--- PASS: TestNetworkPlugins/group/flannel/Start (65.99s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-694698 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.19s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/bridge/Start (74.68s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1123 09:10:22.270118    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/no-preload-052851/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:36.802262    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:36.808628    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:36.820093    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:36.841807    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:36.883188    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:36.964535    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:37.126030    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:37.448360    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:38.089890    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-694698 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m14.680538068s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.68s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-77ksd" [8576fce3-217b-4fc0-834f-bb0da41c8a84] Running
E1123 09:10:39.372149    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:10:41.933614    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003264624s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-694698 "pgrep -a kubelet"
I1123 09:10:44.982950    4624 config.go:182] Loaded profile config "flannel-694698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (9.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-694698 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vn5z5" [b8fbb7ff-a7ef-4f34-97f1-2c1b4a3f34f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1123 09:10:47.054979    4624 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/auto-694698/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-vn5z5" [b8fbb7ff-a7ef-4f34-97f1-2c1b4a3f34f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.002859203s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.48s)
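Note: the NetCatPod step deploys the netcat Deployment from testdata/netcat-deployment.yaml and then waits up to 15m for pods labelled app=netcat to become Ready. A rough manual equivalent of that wait, assuming the same flannel-694698 context used in this run:

	# Watch the test pods come up, then block until they report Ready (15m cap, as in the test).
	kubectl --context flannel-694698 -n default get pods -l app=netcat
	kubectl --context flannel-694698 -n default wait --for=condition=Ready pod -l app=netcat --timeout=15m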

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-694698 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.17s)
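Note: the DNS, Localhost, and HairPin checks above probe three distinct paths from inside a netcat pod: cluster DNS resolution of the kubernetes.default Service via CoreDNS, the container's own listener on localhost:8080, and hairpin traffic, where the pod reaches itself back through its own Service name. A sketch of the same probes run by hand, assuming the flannel-694698 context and the netcat deployment from the steps above:

	# Cluster DNS: resolve the API server's default Service name through the in-cluster resolver.
	kubectl --context flannel-694698 exec deployment/netcat -- nslookup kubernetes.default
	# Loopback: confirm the container itself is listening on 8080.
	kubectl --context flannel-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"
	# Hairpin: connect back to the pod through its own Service; success means hairpin NAT works under this CNI.
	kubectl --context flannel-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"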

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-694698 "pgrep -a kubelet"
I1123 09:11:16.862454    4624 config.go:182] Loaded profile config "bridge-694698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.44s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (8.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-694698 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7v4zf" [394f56c4-bd66-49a9-ad6e-27bcd5759be8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7v4zf" [394f56c4-bd66-49a9-ad6e-27bcd5759be8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004616283s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-694698 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-694698 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

                                                
                                    

Test skip (30/333)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0.42s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-461043 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-461043" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-461043
--- SKIP: TestDownloadOnlyKic (0.42s)

                                                
                                    
x
+
TestOffline (0s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-209145" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-209145
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (5.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-694698 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-694698" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:50:50 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-291582
contexts:
- context:
    cluster: kubernetes-upgrade-291582
    user: kubernetes-upgrade-291582
  name: kubernetes-upgrade-291582
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-291582
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/kubernetes-upgrade-291582/client.crt
    client-key: /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/kubernetes-upgrade-291582/client.key
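Note: the kubeconfig dumped above contains only the leftover kubernetes-upgrade-291582 entries and an empty current-context, which is consistent with the kubenet-694698 profile never having been created for this skipped test; the repeated "context was not found" errors in this debugLogs block follow directly from that. A quick way to confirm by hand, assuming the same kubeconfig as the run:

	# List known contexts; kubenet-694698 is absent and no context is marked current.
	kubectl config get-contexts
	# Errors with "current-context is not set" when current-context is empty.
	kubectl config current-context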

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-694698

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694698"

                                                
                                                
----------------------- debugLogs end: kubenet-694698 [took: 5.406533094s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-694698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-694698
--- SKIP: TestNetworkPlugins/group/kubenet (5.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (6.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-694698 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-694698" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21969-2811/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:55:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-291582
contexts:
- context:
    cluster: kubernetes-upgrade-291582
    extensions:
    - extension:
        last-update: Sun, 23 Nov 2025 08:55:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: kubernetes-upgrade-291582
  name: kubernetes-upgrade-291582
current-context: kubernetes-upgrade-291582
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-291582
  user:
    client-certificate: /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/kubernetes-upgrade-291582/client.crt
    client-key: /home/jenkins/minikube-integration/21969-2811/.minikube/profiles/kubernetes-upgrade-291582/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-694698

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-694698" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694698"

                                                
                                                
----------------------- debugLogs end: cilium-694698 [took: 6.058604004s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-694698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-694698
--- SKIP: TestNetworkPlugins/group/cilium (6.32s)

                                                
                                    