Test Report: Docker_Linux_containerd_arm64 21934

                    
0ee4f00f81c855d6dbc5c3cb2cb1b494940d38dc:2025-11-22:42437

Failed tests (4/333)

Order  Failed test                                                  Duration (s)
301    TestStartStop/group/old-k8s-version/serial/DeployApp        14.85
314    TestStartStop/group/default-k8s-diff-port/serial/DeployApp  13.74
317    TestStartStop/group/embed-certs/serial/DeployApp            14.64
341    TestStartStop/group/no-preload/serial/DeployApp             16.54
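
All four failures hit the same assertion: "ulimit -n" run inside the deployed busybox pod returned 1024 where the test expects 1048576. A minimal way to re-run that check by hand, assuming the cluster from this run is still up (the profile and pod names are taken from the log below):

    # Re-run the failing assertion against the pod
    kubectl --context old-k8s-version-187160 exec busybox -- /bin/sh -c "ulimit -n"

    # Compare with the open-file limit inside the minikube node container itself;
    # it is created with "Ulimits": [] (see the docker inspect output below), so
    # it should inherit the Docker daemon's default nofile limit.
    docker exec old-k8s-version-187160 sh -c "ulimit -n"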
TestStartStop/group/old-k8s-version/serial/DeployApp (14.85s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-187160 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [f6539f6c-3a59-4e72-b903-a218596cb332] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [f6539f6c-3a59-4e72-b903-a218596cb332] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003259659s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-187160 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-187160
helpers_test.go:243: (dbg) docker inspect old-k8s-version-187160:

-- stdout --
	[
	    {
	        "Id": "2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2",
	        "Created": "2025-11-22T00:35:31.769486385Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204882,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:35:31.840763557Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2/hostname",
	        "HostsPath": "/var/lib/docker/containers/2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2/hosts",
	        "LogPath": "/var/lib/docker/containers/2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2/2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2-json.log",
	        "Name": "/old-k8s-version-187160",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-187160:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-187160",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2",
	                "LowerDir": "/var/lib/docker/overlay2/712cfcf4bdda99f3fb5a971120be94c4495350b452b5b4de871b8157be916fde-init/diff:/var/lib/docker/overlay2/7cce95e9587a813ce5f3ee5f28c6de3b78ed608010774b6d981aecaad739a571/diff",
	                "MergedDir": "/var/lib/docker/overlay2/712cfcf4bdda99f3fb5a971120be94c4495350b452b5b4de871b8157be916fde/merged",
	                "UpperDir": "/var/lib/docker/overlay2/712cfcf4bdda99f3fb5a971120be94c4495350b452b5b4de871b8157be916fde/diff",
	                "WorkDir": "/var/lib/docker/overlay2/712cfcf4bdda99f3fb5a971120be94c4495350b452b5b4de871b8157be916fde/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-187160",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-187160/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-187160",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-187160",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-187160",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "454be7a461b1abf0d012dc3454c28ec8e28206a70d925dc40668a3129b452d06",
	            "SandboxKey": "/var/run/docker/netns/454be7a461b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-187160": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:59:fc:b0:2c:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e57260d803e783f0c78a581231fefae2ba2ea5340f147424bbbb6f769732791",
	                    "EndpointID": "fbf38d70257f16dbf88c51dde15f08dd0cc2ad864def7c05de53ec66a3bd02e0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-187160",
	                        "2654618e6a6b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
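One detail worth pulling out of the inspect output above: the node container is created with "Ulimits": [] under HostConfig, i.e. no explicit nofile override, which would be consistent with the pod seeing the default soft limit of 1024 rather than the expected 1048576. A quick sketch for querying just that field (profile name from this run):

    docker inspect -f '{{json .HostConfig.Ulimits}}' old-k8s-version-187160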
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-187160 -n old-k8s-version-187160
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-187160 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-187160 logs -n 25: (1.241227038s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-482944 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo containerd config dump                                                                                                                                                                                                        │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo crio config                                                                                                                                                                                                                   │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ delete  │ -p cilium-482944                                                                                                                                                                                                                                    │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p force-systemd-env-115975 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-115975  │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-381698 │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ start   │ -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-381698 │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p kubernetes-upgrade-381698                                                                                                                                                                                                                        │ kubernetes-upgrade-381698 │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p cert-expiration-285797 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-285797    │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ force-systemd-env-115975 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-115975  │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p force-systemd-env-115975                                                                                                                                                                                                                         │ force-systemd-env-115975  │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p cert-options-089440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-089440       │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ cert-options-089440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-089440       │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ -p cert-options-089440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-089440       │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ delete  │ -p cert-options-089440                                                                                                                                                                                                                              │ cert-options-089440       │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ start   │ -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-187160    │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:36 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:35:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:35:25.495756  204491 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:35:25.496048  204491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:35:25.496081  204491 out.go:374] Setting ErrFile to fd 2...
	I1122 00:35:25.496105  204491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:35:25.496419  204491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:35:25.496952  204491 out.go:368] Setting JSON to false
	I1122 00:35:25.497971  204491 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4663,"bootTime":1763767063,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1122 00:35:25.498081  204491 start.go:143] virtualization:  
	I1122 00:35:25.501708  204491 out.go:179] * [old-k8s-version-187160] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:35:25.506412  204491 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:35:25.506475  204491 notify.go:221] Checking for updates...
	I1122 00:35:25.509912  204491 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:35:25.513456  204491 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:35:25.516592  204491 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1122 00:35:25.519636  204491 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:35:25.523211  204491 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:35:25.526641  204491 config.go:182] Loaded profile config "cert-expiration-285797": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:35:25.526787  204491 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:35:25.555005  204491 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:35:25.555124  204491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:35:25.622538  204491 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:35:25.61288294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:35:25.622642  204491 docker.go:319] overlay module found
	I1122 00:35:25.626050  204491 out.go:179] * Using the docker driver based on user configuration
	I1122 00:35:25.629025  204491 start.go:309] selected driver: docker
	I1122 00:35:25.629048  204491 start.go:930] validating driver "docker" against <nil>
	I1122 00:35:25.629062  204491 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:35:25.629920  204491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:35:25.686113  204491 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:35:25.676702728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:35:25.686272  204491 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:35:25.686516  204491 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:35:25.689987  204491 out.go:179] * Using Docker driver with root privileges
	I1122 00:35:25.693031  204491 cni.go:84] Creating CNI manager for ""
	I1122 00:35:25.693097  204491 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:35:25.693112  204491 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:35:25.693191  204491 start.go:353] cluster config:
	{Name:old-k8s-version-187160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-187160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:35:25.697920  204491 out.go:179] * Starting "old-k8s-version-187160" primary control-plane node in "old-k8s-version-187160" cluster
	I1122 00:35:25.700757  204491 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:35:25.703653  204491 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:35:25.706453  204491 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1122 00:35:25.706497  204491 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1122 00:35:25.706516  204491 cache.go:65] Caching tarball of preloaded images
	I1122 00:35:25.706539  204491 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:35:25.706598  204491 preload.go:238] Found /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1122 00:35:25.706609  204491 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1122 00:35:25.706742  204491 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/config.json ...
	I1122 00:35:25.706761  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/config.json: {Name:mk0e32effc08fb3e92cb6a10a0036ab11d9ac603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:25.725937  204491 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:35:25.725959  204491 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:35:25.725973  204491 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:35:25.725996  204491 start.go:360] acquireMachinesLock for old-k8s-version-187160: {Name:mk9a2f2e89734c88923a7d9b9f969c1a6370f913 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:35:25.726113  204491 start.go:364] duration metric: took 96.034µs to acquireMachinesLock for "old-k8s-version-187160"
	I1122 00:35:25.726144  204491 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-187160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-187160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:35:25.726218  204491 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:35:25.729632  204491 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:35:25.729884  204491 start.go:159] libmachine.API.Create for "old-k8s-version-187160" (driver="docker")
	I1122 00:35:25.729925  204491 client.go:173] LocalClient.Create starting
	I1122 00:35:25.729999  204491 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem
	I1122 00:35:25.730048  204491 main.go:143] libmachine: Decoding PEM data...
	I1122 00:35:25.730067  204491 main.go:143] libmachine: Parsing certificate...
	I1122 00:35:25.730126  204491 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem
	I1122 00:35:25.730149  204491 main.go:143] libmachine: Decoding PEM data...
	I1122 00:35:25.730161  204491 main.go:143] libmachine: Parsing certificate...
	I1122 00:35:25.730541  204491 cli_runner.go:164] Run: docker network inspect old-k8s-version-187160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:35:25.747388  204491 cli_runner.go:211] docker network inspect old-k8s-version-187160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:35:25.747476  204491 network_create.go:284] running [docker network inspect old-k8s-version-187160] to gather additional debugging logs...
	I1122 00:35:25.747497  204491 cli_runner.go:164] Run: docker network inspect old-k8s-version-187160
	W1122 00:35:25.763840  204491 cli_runner.go:211] docker network inspect old-k8s-version-187160 returned with exit code 1
	I1122 00:35:25.763887  204491 network_create.go:287] error running [docker network inspect old-k8s-version-187160]: docker network inspect old-k8s-version-187160: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-187160 not found
	I1122 00:35:25.763900  204491 network_create.go:289] output of [docker network inspect old-k8s-version-187160]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-187160 not found
	
	** /stderr **
	I1122 00:35:25.764015  204491 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:35:25.780864  204491 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc891483483f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:f5:f5:5e:a2:12} reservation:<nil>}
	I1122 00:35:25.781241  204491 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dcada94e63da IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:bf:ad:c8:04:5e} reservation:<nil>}
	I1122 00:35:25.781562  204491 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7ab25f17f29c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:32:b1:2f:5f:ec} reservation:<nil>}
	I1122 00:35:25.781803  204491 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cb9cfba5857 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:53:4c:fc:1c:97} reservation:<nil>}
	I1122 00:35:25.782260  204491 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3a10}
	I1122 00:35:25.782291  204491 network_create.go:124] attempt to create docker network old-k8s-version-187160 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:35:25.782360  204491 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-187160 old-k8s-version-187160
	I1122 00:35:25.842320  204491 network_create.go:108] docker network old-k8s-version-187160 192.168.85.0/24 created
	I1122 00:35:25.842355  204491 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-187160" container
	I1122 00:35:25.842428  204491 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:35:25.861485  204491 cli_runner.go:164] Run: docker volume create old-k8s-version-187160 --label name.minikube.sigs.k8s.io=old-k8s-version-187160 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:35:25.879635  204491 oci.go:103] Successfully created a docker volume old-k8s-version-187160
	I1122 00:35:25.879743  204491 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-187160-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-187160 --entrypoint /usr/bin/test -v old-k8s-version-187160:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:35:26.435658  204491 oci.go:107] Successfully prepared a docker volume old-k8s-version-187160
	I1122 00:35:26.435728  204491 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1122 00:35:26.435747  204491 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:35:26.435823  204491 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-187160:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:35:31.696637  204491 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-187160:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (5.260759946s)
	I1122 00:35:31.696672  204491 kic.go:203] duration metric: took 5.260921646s to extract preloaded images to volume ...
	W1122 00:35:31.696817  204491 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:35:31.696931  204491 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:35:31.753972  204491 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-187160 --name old-k8s-version-187160 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-187160 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-187160 --network old-k8s-version-187160 --ip 192.168.85.2 --volume old-k8s-version-187160:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:35:32.113795  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Running}}
	I1122 00:35:32.148430  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:35:32.171721  204491 cli_runner.go:164] Run: docker exec old-k8s-version-187160 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:35:32.239681  204491 oci.go:144] the created container "old-k8s-version-187160" has a running status.
	I1122 00:35:32.239723  204491 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa...
	I1122 00:35:32.387029  204491 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:35:32.412458  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:35:32.436918  204491 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:35:32.436940  204491 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-187160 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:35:32.514133  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:35:32.533241  204491 machine.go:94] provisionDockerMachine start ...
	I1122 00:35:32.533325  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:32.550125  204491 main.go:143] libmachine: Using SSH client type: native
	I1122 00:35:32.550468  204491 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1122 00:35:32.550477  204491 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:35:32.551138  204491 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:35:35.695199  204491 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-187160
	
	I1122 00:35:35.695223  204491 ubuntu.go:182] provisioning hostname "old-k8s-version-187160"
	I1122 00:35:35.695289  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:35.713198  204491 main.go:143] libmachine: Using SSH client type: native
	I1122 00:35:35.713512  204491 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1122 00:35:35.713524  204491 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-187160 && echo "old-k8s-version-187160" | sudo tee /etc/hostname
	I1122 00:35:35.869138  204491 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-187160
	
	I1122 00:35:35.869219  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:35.895158  204491 main.go:143] libmachine: Using SSH client type: native
	I1122 00:35:35.895475  204491 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1122 00:35:35.895503  204491 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-187160' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-187160/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-187160' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:35:36.040218  204491 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:35:36.040314  204491 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-2332/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-2332/.minikube}
	I1122 00:35:36.040363  204491 ubuntu.go:190] setting up certificates
	I1122 00:35:36.040400  204491 provision.go:84] configureAuth start
	I1122 00:35:36.040487  204491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187160
	I1122 00:35:36.058556  204491 provision.go:143] copyHostCerts
	I1122 00:35:36.058636  204491 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem, removing ...
	I1122 00:35:36.058646  204491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem
	I1122 00:35:36.058727  204491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem (1078 bytes)
	I1122 00:35:36.058834  204491 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem, removing ...
	I1122 00:35:36.058840  204491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem
	I1122 00:35:36.058868  204491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem (1123 bytes)
	I1122 00:35:36.058922  204491 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem, removing ...
	I1122 00:35:36.058927  204491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem
	I1122 00:35:36.058953  204491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem (1675 bytes)
	I1122 00:35:36.059005  204491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-187160 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-187160]
	I1122 00:35:36.348541  204491 provision.go:177] copyRemoteCerts
	I1122 00:35:36.348614  204491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:35:36.348667  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:36.369601  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:35:36.471378  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1122 00:35:36.490458  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:35:36.508270  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:35:36.526938  204491 provision.go:87] duration metric: took 486.502541ms to configureAuth
	I1122 00:35:36.527021  204491 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:35:36.527237  204491 config.go:182] Loaded profile config "old-k8s-version-187160": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1122 00:35:36.527255  204491 machine.go:97] duration metric: took 3.993997448s to provisionDockerMachine
	I1122 00:35:36.527264  204491 client.go:176] duration metric: took 10.797328966s to LocalClient.Create
	I1122 00:35:36.527293  204491 start.go:167] duration metric: took 10.797411199s to libmachine.API.Create "old-k8s-version-187160"
	I1122 00:35:36.527306  204491 start.go:293] postStartSetup for "old-k8s-version-187160" (driver="docker")
	I1122 00:35:36.527316  204491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:35:36.527381  204491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:35:36.527430  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:36.545627  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:35:36.647923  204491 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:35:36.651471  204491 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:35:36.651502  204491 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:35:36.651514  204491 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/addons for local assets ...
	I1122 00:35:36.651600  204491 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/files for local assets ...
	I1122 00:35:36.651680  204491 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem -> 56232.pem in /etc/ssl/certs
	I1122 00:35:36.651791  204491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:35:36.659452  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:35:36.677628  204491 start.go:296] duration metric: took 150.307453ms for postStartSetup
	I1122 00:35:36.678009  204491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187160
	I1122 00:35:36.695200  204491 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/config.json ...
	I1122 00:35:36.695488  204491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:35:36.695533  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:36.712347  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:35:36.812625  204491 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:35:36.817876  204491 start.go:128] duration metric: took 11.091643714s to createHost
	I1122 00:35:36.817903  204491 start.go:83] releasing machines lock for "old-k8s-version-187160", held for 11.091773776s
	I1122 00:35:36.817972  204491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187160
	I1122 00:35:36.834925  204491 ssh_runner.go:195] Run: cat /version.json
	I1122 00:35:36.834937  204491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:35:36.834978  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:36.834993  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:36.857313  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:35:36.858772  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:35:37.058580  204491 ssh_runner.go:195] Run: systemctl --version
	I1122 00:35:37.065466  204491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:35:37.070084  204491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:35:37.070153  204491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:35:37.102719  204491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
	I1122 00:35:37.102749  204491 start.go:496] detecting cgroup driver to use...
	I1122 00:35:37.102783  204491 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:35:37.102844  204491 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:35:37.120639  204491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:35:37.134362  204491 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:35:37.134421  204491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:35:37.154417  204491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:35:37.175282  204491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:35:37.300828  204491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:35:37.432936  204491 docker.go:234] disabling docker service ...
	I1122 00:35:37.433033  204491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:35:37.460467  204491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:35:37.474617  204491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:35:37.589678  204491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:35:37.706875  204491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:35:37.721264  204491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:35:37.736772  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1122 00:35:37.745143  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:35:37.754554  204491 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1122 00:35:37.754677  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1122 00:35:37.764278  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:35:37.773387  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:35:37.782068  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:35:37.791172  204491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:35:37.799462  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:35:37.809187  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:35:37.819591  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
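The run of sed edits above rewrites /etc/containerd/config.toml in place: sandbox (pause) image, OOM-score restriction, cgroup driver, runc v2 shims, CNI conf dir, and unprivileged ports. A minimal check that they all landed (the grep pattern is mine; the expected values are read straight from the sed expressions above):

    grep -E 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
    # SystemdCgroup = false                         <- matches the "cgroupfs" driver detected on the host
    # sandbox_image = "registry.k8s.io/pause:3.9"
    # restrict_oom_score_adj = false
    # conf_dir = "/etc/cni/net.d"
    # enable_unprivileged_ports = true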
	I1122 00:35:37.829972  204491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:35:37.837818  204491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:35:37.845123  204491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:35:37.972336  204491 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1122 00:35:38.106484  204491 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:35:38.106607  204491 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:35:38.110682  204491 start.go:564] Will wait 60s for crictl version
	I1122 00:35:38.110811  204491 ssh_runner.go:195] Run: which crictl
	I1122 00:35:38.114578  204491 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:35:38.143349  204491 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:35:38.143510  204491 ssh_runner.go:195] Run: containerd --version
	I1122 00:35:38.169250  204491 ssh_runner.go:195] Run: containerd --version
	I1122 00:35:38.195498  204491 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1122 00:35:38.198689  204491 cli_runner.go:164] Run: docker network inspect old-k8s-version-187160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:35:38.215899  204491 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:35:38.220352  204491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
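The /etc/hosts rewrite above is a small idempotent idiom: grep -v strips any stale entry for the name, echo appends the fresh one, and the result is copied back over /etc/hosts in one sudo step (a temp file is used because the shell redirection itself runs unprivileged). Restated generically, with variable names that are mine:

    NAME=host.minikube.internal; IP=192.168.85.1    # values from the log line above
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts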
	I1122 00:35:38.230933  204491 kubeadm.go:884] updating cluster {Name:old-k8s-version-187160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-187160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:35:38.231054  204491 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1122 00:35:38.231125  204491 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:35:38.256948  204491 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:35:38.256971  204491 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:35:38.257034  204491 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:35:38.286540  204491 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:35:38.286568  204491 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:35:38.286577  204491 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1122 00:35:38.286682  204491 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-187160 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-187160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
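The [Unit]/[Service]/[Install] fragment above is a systemd drop-in, not a complete unit; per the scp lines below it lands as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes). The empty ExecStart= line is deliberate: it clears the ExecStart inherited from the base kubelet.service before setting the minikube-specific command line. On a node, the merged result is visible with:

    systemctl cat kubelet    # prints the base unit plus the 10-kubeadm.conf drop-in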
	I1122 00:35:38.286756  204491 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:35:38.313643  204491 cni.go:84] Creating CNI manager for ""
	I1122 00:35:38.313668  204491 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:35:38.313689  204491 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:35:38.313711  204491 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-187160 NodeName:old-k8s-version-187160 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:35:38.313845  204491 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-187160"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:35:38.313918  204491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1122 00:35:38.322064  204491 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:35:38.322135  204491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:35:38.330790  204491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1122 00:35:38.348355  204491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:35:38.363167  204491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
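The 2176-byte kubeadm.yaml.new just copied is the four-document config dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). As a sketch, a generated config like this can be exercised without mutating node state via kubeadm's dry-run mode (a real kubeadm init flag; the path matches the copy above):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run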
	I1122 00:35:38.376556  204491 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:35:38.380569  204491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:35:38.391056  204491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:35:38.526025  204491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:35:38.545711  204491 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160 for IP: 192.168.85.2
	I1122 00:35:38.545734  204491 certs.go:195] generating shared ca certs ...
	I1122 00:35:38.545751  204491 certs.go:227] acquiring lock for ca certs: {Name:mk348a892ec4309987f6c81ee1acef4884ca62db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:38.545938  204491 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key
	I1122 00:35:38.545988  204491 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key
	I1122 00:35:38.546003  204491 certs.go:257] generating profile certs ...
	I1122 00:35:38.546061  204491 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.key
	I1122 00:35:38.546079  204491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt with IP's: []
	I1122 00:35:38.803940  204491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt ...
	I1122 00:35:38.803973  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: {Name:mk95462d156959a6f9b819420692e4652b18d9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:38.804179  204491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.key ...
	I1122 00:35:38.804193  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.key: {Name:mk162292d187fc773689134dda95a4cf7124ec7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:38.804293  204491 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key.1112e05c
	I1122 00:35:38.804315  204491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt.1112e05c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1122 00:35:39.153162  204491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt.1112e05c ...
	I1122 00:35:39.153195  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt.1112e05c: {Name:mka276df0b08ad9c5f26731f4b9f3e54b782777f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:39.153401  204491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key.1112e05c ...
	I1122 00:35:39.153416  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key.1112e05c: {Name:mk9054d49d862aec16e8a3fc2afe3d910c07fd2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:39.153506  204491 certs.go:382] copying /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt.1112e05c -> /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt
	I1122 00:35:39.153592  204491 certs.go:386] copying /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key.1112e05c -> /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key
	I1122 00:35:39.153655  204491 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.key
	I1122 00:35:39.153677  204491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.crt with IP's: []
	I1122 00:35:39.563837  204491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.crt ...
	I1122 00:35:39.563873  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.crt: {Name:mk9c0df39f4f112052b7beaf5fa971f1bf609226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:39.564069  204491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.key ...
	I1122 00:35:39.564088  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.key: {Name:mk7e4d396efea8259e6e7217a8413e2f5c662eb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:39.564286  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem (1338 bytes)
	W1122 00:35:39.564334  204491 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623_empty.pem, impossibly tiny 0 bytes
	I1122 00:35:39.564349  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:35:39.564376  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:35:39.564404  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:35:39.564428  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem (1675 bytes)
	I1122 00:35:39.564481  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:35:39.565109  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:35:39.584667  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:35:39.603030  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:35:39.630817  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:35:39.649557  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1122 00:35:39.668869  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:35:39.688997  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:35:39.706802  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:35:39.724608  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem --> /usr/share/ca-certificates/5623.pem (1338 bytes)
	I1122 00:35:39.743500  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /usr/share/ca-certificates/56232.pem (1708 bytes)
	I1122 00:35:39.762127  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:35:39.780891  204491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:35:39.794217  204491 ssh_runner.go:195] Run: openssl version
	I1122 00:35:39.800660  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56232.pem && ln -fs /usr/share/ca-certificates/56232.pem /etc/ssl/certs/56232.pem"
	I1122 00:35:39.810417  204491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56232.pem
	I1122 00:35:39.814201  204491 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/56232.pem
	I1122 00:35:39.814271  204491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56232.pem
	I1122 00:35:39.857681  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56232.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:35:39.868835  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:35:39.877140  204491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:35:39.881319  204491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:35:39.881438  204491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:35:39.930136  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:35:39.938568  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5623.pem && ln -fs /usr/share/ca-certificates/5623.pem /etc/ssl/certs/5623.pem"
	I1122 00:35:39.948496  204491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5623.pem
	I1122 00:35:39.952490  204491 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/5623.pem
	I1122 00:35:39.952578  204491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5623.pem
	I1122 00:35:39.994578  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5623.pem /etc/ssl/certs/51391683.0"
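The link names chosen above (51391683.0, b5213941.0, 3ec20f2e.0) are OpenSSL subject hashes: `openssl x509 -hash` prints an 8-hex-digit hash of the certificate subject, and the trailing .0 is a collision counter, which is exactly the naming the OpenSSL cert-directory lookup expects. The pairing is visible in the log itself:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # b5213941   -> symlinked as /etc/ssl/certs/b5213941.0 in the command above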
	I1122 00:35:40.027794  204491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:35:40.032651  204491 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:35:40.032738  204491 kubeadm.go:401] StartCluster: {Name:old-k8s-version-187160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-187160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:35:40.032812  204491 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:35:40.032877  204491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:35:40.076761  204491 cri.go:89] found id: ""
	I1122 00:35:40.076845  204491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:35:40.090718  204491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:35:40.100082  204491 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:35:40.100159  204491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:35:40.112504  204491 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:35:40.112529  204491 kubeadm.go:158] found existing configuration files:
	
	I1122 00:35:40.112593  204491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:35:40.124171  204491 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:35:40.124252  204491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:35:40.132777  204491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:35:40.143132  204491 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:35:40.143196  204491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:35:40.151096  204491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:35:40.159251  204491 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:35:40.159333  204491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:35:40.167381  204491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:35:40.175820  204491 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:35:40.175919  204491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:35:40.183634  204491 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
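The long --ignore-preflight-errors list above suppresses checks that are expected to fail when the "node" is itself a container: in-use ports, swap, memory and CPU minimums, kernel config, and pre-existing manifest directories. A sketch for running just the preflight stage with the same effect, using kubeadm's phase subcommand (abbreviated skip list for illustration; the test passes the full list above):

    sudo kubeadm init phase preflight --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=Swap,Mem,NumCPU,SystemVerification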
	I1122 00:35:40.228995  204491 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1122 00:35:40.229354  204491 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:35:40.270119  204491 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:35:40.270223  204491 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:35:40.270286  204491 kubeadm.go:319] OS: Linux
	I1122 00:35:40.270358  204491 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:35:40.270431  204491 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:35:40.270506  204491 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:35:40.270583  204491 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:35:40.270653  204491 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:35:40.270725  204491 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:35:40.270800  204491 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:35:40.270872  204491 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:35:40.270941  204491 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:35:40.363742  204491 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:35:40.363899  204491 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:35:40.364030  204491 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1122 00:35:40.513095  204491 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:35:40.516289  204491 out.go:252]   - Generating certificates and keys ...
	I1122 00:35:40.516432  204491 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:35:40.516528  204491 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:35:40.711317  204491 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:35:40.978650  204491 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:35:41.429330  204491 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:35:41.658530  204491 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:35:41.984642  204491 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:35:41.985033  204491 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-187160] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:35:42.248353  204491 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:35:42.249067  204491 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-187160] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:35:42.615803  204491 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:35:43.335579  204491 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:35:43.815188  204491 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:35:43.815282  204491 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:35:44.402761  204491 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:35:44.753906  204491 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:35:45.167383  204491 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:35:45.452761  204491 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:35:45.453364  204491 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:35:45.456015  204491 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:35:45.459671  204491 out.go:252]   - Booting up control plane ...
	I1122 00:35:45.459771  204491 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:35:45.459848  204491 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:35:45.459914  204491 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:35:45.476046  204491 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:35:45.477240  204491 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:35:45.477361  204491 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:35:45.612547  204491 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1122 00:35:54.117999  204491 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.505578 seconds
	I1122 00:35:54.118473  204491 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:35:54.139082  204491 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:35:54.670397  204491 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:35:54.670862  204491 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-187160 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:35:55.185028  204491 kubeadm.go:319] [bootstrap-token] Using token: to2lwb.i9cb3jhv4v448q3k
	I1122 00:35:55.188081  204491 out.go:252]   - Configuring RBAC rules ...
	I1122 00:35:55.188220  204491 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:35:55.195758  204491 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:35:55.205333  204491 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:35:55.209737  204491 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:35:55.214290  204491 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:35:55.218575  204491 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:35:55.236174  204491 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:35:55.499266  204491 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:35:55.616225  204491 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:35:55.617959  204491 kubeadm.go:319] 
	I1122 00:35:55.618033  204491 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:35:55.618040  204491 kubeadm.go:319] 
	I1122 00:35:55.618117  204491 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:35:55.618121  204491 kubeadm.go:319] 
	I1122 00:35:55.618145  204491 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:35:55.618693  204491 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:35:55.618750  204491 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:35:55.618755  204491 kubeadm.go:319] 
	I1122 00:35:55.618823  204491 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:35:55.618828  204491 kubeadm.go:319] 
	I1122 00:35:55.618876  204491 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:35:55.618880  204491 kubeadm.go:319] 
	I1122 00:35:55.618932  204491 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:35:55.619007  204491 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:35:55.619075  204491 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:35:55.619079  204491 kubeadm.go:319] 
	I1122 00:35:55.619393  204491 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:35:55.619477  204491 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:35:55.619482  204491 kubeadm.go:319] 
	I1122 00:35:55.619814  204491 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token to2lwb.i9cb3jhv4v448q3k \
	I1122 00:35:55.619924  204491 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6ad26553e08ef3801627a7166e0bb20bf24427585c6187a46d63e60c79d4d84c \
	I1122 00:35:55.620145  204491 kubeadm.go:319] 	--control-plane 
	I1122 00:35:55.620154  204491 kubeadm.go:319] 
	I1122 00:35:55.620453  204491 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:35:55.620509  204491 kubeadm.go:319] 
	I1122 00:35:55.620799  204491 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token to2lwb.i9cb3jhv4v448q3k \
	I1122 00:35:55.621154  204491 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6ad26553e08ef3801627a7166e0bb20bf24427585c6187a46d63e60c79d4d84c 
	I1122 00:35:55.628501  204491 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1122 00:35:55.628624  204491 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
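The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 over the cluster CA's public key, not over the whole certificate. With minikube's certificateDir of /var/lib/minikube/certs (see the [certs] line earlier), it can be recomputed on the control plane using the standard kubeadm recipe:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should print 6ad26553e08ef3801627a7166e0bb20bf24427585c6187a46d63e60c79d4d84c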
	I1122 00:35:55.628655  204491 cni.go:84] Creating CNI manager for ""
	I1122 00:35:55.628665  204491 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:35:55.632117  204491 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:35:55.635060  204491 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:35:55.639896  204491 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1122 00:35:55.639971  204491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:35:55.669508  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:35:56.680357  204491 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.010766434s)
	I1122 00:35:56.680400  204491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:35:56.680512  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:56.680587  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-187160 minikube.k8s.io/updated_at=2025_11_22T00_35_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=old-k8s-version-187160 minikube.k8s.io/primary=true
	I1122 00:35:56.912253  204491 ops.go:34] apiserver oom_adj: -16
	I1122 00:35:56.912369  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:57.412908  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:57.912481  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:58.412668  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:58.912654  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:59.413213  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:59.912565  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:00.412648  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:00.913361  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:01.412704  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:01.912815  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:02.412478  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:02.913389  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:03.413307  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:03.912649  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:04.413090  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:04.912826  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:05.413328  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:05.913130  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:06.412960  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:06.912510  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:07.412753  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:07.912460  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:08.412631  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:08.527706  204491 kubeadm.go:1114] duration metric: took 11.847239896s to wait for elevateKubeSystemPrivileges
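The burst of identical `kubectl get sa default` runs above is a half-second poll: the default ServiceAccount only exists once the controller-manager's service-account controllers are up, so minikube retries until the command exits 0. As a plain-shell restatement of that loop:

    until sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5    # matches the ~500ms spacing of the timestamps above
    done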
	I1122 00:36:08.527738  204491 kubeadm.go:403] duration metric: took 28.495021982s to StartCluster
	I1122 00:36:08.527756  204491 settings.go:142] acquiring lock: {Name:mk5b79634916fd13f05f4c848ff3e8b07cafa39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:36:08.527819  204491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:36:08.528755  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:36:08.528975  204491 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:36:08.529135  204491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:36:08.529411  204491 config.go:182] Loaded profile config "old-k8s-version-187160": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1122 00:36:08.529451  204491 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:36:08.529508  204491 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-187160"
	I1122 00:36:08.529521  204491 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-187160"
	I1122 00:36:08.529542  204491 host.go:66] Checking if "old-k8s-version-187160" exists ...
	I1122 00:36:08.530290  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:36:08.530298  204491 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-187160"
	I1122 00:36:08.530316  204491 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-187160"
	I1122 00:36:08.530614  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:36:08.532233  204491 out.go:179] * Verifying Kubernetes components...
	I1122 00:36:08.535145  204491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:36:08.586966  204491 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:36:08.587190  204491 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-187160"
	I1122 00:36:08.587223  204491 host.go:66] Checking if "old-k8s-version-187160" exists ...
	I1122 00:36:08.587683  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:36:08.591301  204491 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:36:08.591402  204491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:36:08.591477  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:36:08.635204  204491 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:36:08.635226  204491 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:36:08.635293  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:36:08.651636  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:36:08.678324  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:36:08.944863  204491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:36:08.950679  204491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:36:08.950805  204491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:36:08.978464  204491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:36:09.986874  204491 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.036045446s)
	I1122 00:36:09.987832  204491 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-187160" to be "Ready" ...
	I1122 00:36:09.988180  204491 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.037475899s)
	I1122 00:36:09.988235  204491 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1122 00:36:10.240303  204491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261804778s)
	I1122 00:36:10.243645  204491 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1122 00:36:10.246631  204491 addons.go:530] duration metric: took 1.717174385s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1122 00:36:10.493160  204491 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-187160" context rescaled to 1 replicas
	W1122 00:36:11.991518  204491 node_ready.go:57] node "old-k8s-version-187160" has "Ready":"False" status (will retry)
	W1122 00:36:14.491462  204491 node_ready.go:57] node "old-k8s-version-187160" has "Ready":"False" status (will retry)
	W1122 00:36:16.991034  204491 node_ready.go:57] node "old-k8s-version-187160" has "Ready":"False" status (will retry)
	W1122 00:36:18.991616  204491 node_ready.go:57] node "old-k8s-version-187160" has "Ready":"False" status (will retry)
	W1122 00:36:21.491517  204491 node_ready.go:57] node "old-k8s-version-187160" has "Ready":"False" status (will retry)
	I1122 00:36:21.991333  204491 node_ready.go:49] node "old-k8s-version-187160" is "Ready"
	I1122 00:36:21.991365  204491 node_ready.go:38] duration metric: took 12.003472888s for node "old-k8s-version-187160" to be "Ready" ...
	I1122 00:36:21.991381  204491 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:36:21.991444  204491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:36:22.006058  204491 api_server.go:72] duration metric: took 13.477048085s to wait for apiserver process to appear ...
	I1122 00:36:22.006087  204491 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:36:22.006108  204491 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:36:22.016962  204491 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:36:22.018742  204491 api_server.go:141] control plane version: v1.28.0
	I1122 00:36:22.018771  204491 api_server.go:131] duration metric: took 12.676666ms to wait for apiserver health ...
	I1122 00:36:22.018781  204491 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:36:22.024300  204491 system_pods.go:59] 8 kube-system pods found
	I1122 00:36:22.024339  204491 system_pods.go:61] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:36:22.024346  204491 system_pods.go:61] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:22.024352  204491 system_pods.go:61] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:22.024356  204491 system_pods.go:61] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:22.024364  204491 system_pods.go:61] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:22.024368  204491 system_pods.go:61] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:22.024372  204491 system_pods.go:61] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:22.024377  204491 system_pods.go:61] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:36:22.024383  204491 system_pods.go:74] duration metric: took 5.596781ms to wait for pod list to return data ...
	I1122 00:36:22.024392  204491 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:36:22.027545  204491 default_sa.go:45] found service account: "default"
	I1122 00:36:22.027626  204491 default_sa.go:55] duration metric: took 3.227688ms for default service account to be created ...
	I1122 00:36:22.027638  204491 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:36:22.032182  204491 system_pods.go:86] 8 kube-system pods found
	I1122 00:36:22.032219  204491 system_pods.go:89] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:36:22.032226  204491 system_pods.go:89] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:22.032233  204491 system_pods.go:89] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:22.032237  204491 system_pods.go:89] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:22.032242  204491 system_pods.go:89] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:22.032246  204491 system_pods.go:89] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:22.032250  204491 system_pods.go:89] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:22.032258  204491 system_pods.go:89] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:36:22.032287  204491 retry.go:31] will retry after 227.562193ms: missing components: kube-dns
	I1122 00:36:22.265424  204491 system_pods.go:86] 8 kube-system pods found
	I1122 00:36:22.265462  204491 system_pods.go:89] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:36:22.265472  204491 system_pods.go:89] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:22.265478  204491 system_pods.go:89] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:22.265483  204491 system_pods.go:89] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:22.265489  204491 system_pods.go:89] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:22.265493  204491 system_pods.go:89] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:22.265497  204491 system_pods.go:89] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:22.265504  204491 system_pods.go:89] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:36:22.265524  204491 retry.go:31] will retry after 240.91922ms: missing components: kube-dns
	I1122 00:36:22.510867  204491 system_pods.go:86] 8 kube-system pods found
	I1122 00:36:22.510912  204491 system_pods.go:89] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:36:22.510921  204491 system_pods.go:89] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:22.510927  204491 system_pods.go:89] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:22.510933  204491 system_pods.go:89] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:22.510938  204491 system_pods.go:89] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:22.510941  204491 system_pods.go:89] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:22.510946  204491 system_pods.go:89] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:22.510951  204491 system_pods.go:89] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:36:22.510973  204491 retry.go:31] will retry after 348.682328ms: missing components: kube-dns
	I1122 00:36:22.864222  204491 system_pods.go:86] 8 kube-system pods found
	I1122 00:36:22.864266  204491 system_pods.go:89] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:36:22.864274  204491 system_pods.go:89] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:22.864281  204491 system_pods.go:89] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:22.864286  204491 system_pods.go:89] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:22.864291  204491 system_pods.go:89] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:22.864295  204491 system_pods.go:89] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:22.864300  204491 system_pods.go:89] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:22.864306  204491 system_pods.go:89] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:36:22.864321  204491 retry.go:31] will retry after 425.9451ms: missing components: kube-dns
	I1122 00:36:23.295106  204491 system_pods.go:86] 8 kube-system pods found
	I1122 00:36:23.295135  204491 system_pods.go:89] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Running
	I1122 00:36:23.295143  204491 system_pods.go:89] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:23.295148  204491 system_pods.go:89] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:23.295152  204491 system_pods.go:89] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:23.295157  204491 system_pods.go:89] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:23.295161  204491 system_pods.go:89] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:23.295165  204491 system_pods.go:89] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:23.295169  204491 system_pods.go:89] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Running
	I1122 00:36:23.295176  204491 system_pods.go:126] duration metric: took 1.267532498s to wait for k8s-apps to be running ...
	I1122 00:36:23.295183  204491 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:36:23.295236  204491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:36:23.308784  204491 system_svc.go:56] duration metric: took 13.592552ms WaitForService to wait for kubelet
	I1122 00:36:23.308825  204491 kubeadm.go:587] duration metric: took 14.779827699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:36:23.308852  204491 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:36:23.312011  204491 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:36:23.312046  204491 node_conditions.go:123] node cpu capacity is 2
	I1122 00:36:23.312059  204491 node_conditions.go:105] duration metric: took 3.201235ms to run NodePressure ...
	I1122 00:36:23.312071  204491 start.go:242] waiting for startup goroutines ...
	I1122 00:36:23.312086  204491 start.go:247] waiting for cluster config update ...
	I1122 00:36:23.312101  204491 start.go:256] writing updated cluster config ...
	I1122 00:36:23.312384  204491 ssh_runner.go:195] Run: rm -f paused
	I1122 00:36:23.316108  204491 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:36:23.320553  204491 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mrsrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.325648  204491 pod_ready.go:94] pod "coredns-5dd5756b68-mrsrv" is "Ready"
	I1122 00:36:23.325674  204491 pod_ready.go:86] duration metric: took 5.095957ms for pod "coredns-5dd5756b68-mrsrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.328915  204491 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.334037  204491 pod_ready.go:94] pod "etcd-old-k8s-version-187160" is "Ready"
	I1122 00:36:23.334067  204491 pod_ready.go:86] duration metric: took 5.116954ms for pod "etcd-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.337625  204491 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.356839  204491 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-187160" is "Ready"
	I1122 00:36:23.356876  204491 pod_ready.go:86] duration metric: took 19.22459ms for pod "kube-apiserver-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.361446  204491 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.720938  204491 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-187160" is "Ready"
	I1122 00:36:23.720968  204491 pod_ready.go:86] duration metric: took 359.490555ms for pod "kube-controller-manager-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.921107  204491 pod_ready.go:83] waiting for pod "kube-proxy-bmr5t" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:24.320758  204491 pod_ready.go:94] pod "kube-proxy-bmr5t" is "Ready"
	I1122 00:36:24.320787  204491 pod_ready.go:86] duration metric: took 399.655664ms for pod "kube-proxy-bmr5t" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:24.520579  204491 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:24.920820  204491 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-187160" is "Ready"
	I1122 00:36:24.920848  204491 pod_ready.go:86] duration metric: took 400.240328ms for pod "kube-scheduler-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:24.920861  204491 pod_ready.go:40] duration metric: took 1.604716609s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:36:24.979879  204491 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1122 00:36:24.983429  204491 out.go:203] 
	W1122 00:36:24.986583  204491 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:36:24.995477  204491 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:36:24.998398  204491 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-187160" cluster and "default" namespace by default
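
Note on the wait loops above: the node_ready.go and retry.go lines are a plain poll-with-backoff pattern, checking the node's Ready condition roughly every 2.5s and re-listing kube-system pods at growing intervals (227ms, 240ms, 348ms, 425ms) until kube-dns is running. A minimal sketch of the node-Ready poll using client-go; waitForNodeReady is an illustrative name, not minikube's actual helper, and the kubeconfig path is taken from the log above:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node's Ready condition until it is True or the
// timeout elapses, mirroring the node_ready.go wait recorded in the log.
func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2500 * time.Millisecond) // the log shows ~2.5s between retries
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(cs, "old-k8s-version-187160", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}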
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	494bc3382cb83       1611cd07b61d5       8 seconds ago       Running             busybox                   0                   d35a249623dcf       busybox                                          default
	654eec2541e67       97e04611ad434       14 seconds ago      Running             coredns                   0                   8a99471b5af37       coredns-5dd5756b68-mrsrv                         kube-system
	807c22f672611       ba04bb24b9575       14 seconds ago      Running             storage-provisioner       0                   9cb1da3721961       storage-provisioner                              kube-system
	8cdc46abde6ba       b1a8c6f707935       25 seconds ago      Running             kindnet-cni               0                   692b4826b8541       kindnet-lprzz                                    kube-system
	8e33bd8eeab59       940f54a5bcae9       27 seconds ago      Running             kube-proxy                0                   14060723f1c30       kube-proxy-bmr5t                                 kube-system
	f4bd605783e20       762dce4090c5f       48 seconds ago      Running             kube-scheduler            0                   7720a7ec0099e       kube-scheduler-old-k8s-version-187160            kube-system
	422173de99e2a       46cc66ccc7c19       48 seconds ago      Running             kube-controller-manager   0                   4956ac9800f65       kube-controller-manager-old-k8s-version-187160   kube-system
	4d2a9fc38adb0       9cdd6470f48c8       48 seconds ago      Running             etcd                      0                   5bbc7a9687c9e       etcd-old-k8s-version-187160                      kube-system
	8e8018cdd5ebc       00543d2fe5d71       48 seconds ago      Running             kube-apiserver            0                   51821586bd991       kube-apiserver-old-k8s-version-187160            kube-system
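
The table above is CRI data, but the same containers are visible through containerd's Go client in the "k8s.io" namespace. A minimal listing sketch, assuming access to /run/containerd/containerd.sock (the socket path shown in the containerd log below); import paths follow the containerd 1.x client module, and the v2 module relocates the client package:

package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		info, err := c.Info(ctx)
		if err != nil {
			continue
		}
		// Print truncated IDs like the CONTAINER column above.
		fmt.Printf("%.13s  %s\n", c.ID(), info.Image)
	}
}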
	
	
	==> containerd <==
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.190949694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mrsrv,Uid:98b86160-bc56-4571-a3ac-ebfd93eda042,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a99471b5af37996285fcf9181e4881d81926045a25ef6ae4127c3af1567110b\""
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.200275758Z" level=info msg="CreateContainer within sandbox \"8a99471b5af37996285fcf9181e4881d81926045a25ef6ae4127c3af1567110b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.218592273Z" level=info msg="Container 654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.229282499Z" level=info msg="CreateContainer within sandbox \"8a99471b5af37996285fcf9181e4881d81926045a25ef6ae4127c3af1567110b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222\""
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.230197671Z" level=info msg="StartContainer for \"654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222\""
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.231322709Z" level=info msg="connecting to shim 654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222" address="unix:///run/containerd/s/7e8e622c8ba08e7a15e3b2eb24a2e7f882c657cd7f8e507b49e75e8c8b234d1a" protocol=ttrpc version=3
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.288656314Z" level=info msg="StartContainer for \"807c22f67261185cb1c38e8c47426e487d0218ab5042e337b6019698fe15e361\" returns successfully"
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.360970347Z" level=info msg="StartContainer for \"654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222\" returns successfully"
	Nov 22 00:36:25 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:25.549036859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f6539f6c-3a59-4e72-b903-a218596cb332,Namespace:default,Attempt:0,}"
	Nov 22 00:36:25 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:25.612228523Z" level=info msg="connecting to shim d35a249623dcf4705609d48e3a1ffdb56d8625cae031e61983391e525f34d081" address="unix:///run/containerd/s/12cff00c35647593d05548c7fa195d2bdf00b8303d1fe1bb1c09dbc3effac604" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:36:25 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:25.685871560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f6539f6c-3a59-4e72-b903-a218596cb332,Namespace:default,Attempt:0,} returns sandbox id \"d35a249623dcf4705609d48e3a1ffdb56d8625cae031e61983391e525f34d081\""
	Nov 22 00:36:25 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:25.688018486Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.826931948Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.828940345Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937185"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.831283919Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.834691884Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.835352013Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.147291007s"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.835469963Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.837431599Z" level=info msg="CreateContainer within sandbox \"d35a249623dcf4705609d48e3a1ffdb56d8625cae031e61983391e525f34d081\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.857207130Z" level=info msg="Container 494bc3382cb83736807dcb36ea6944784af33e33f94882808930795f27388c7f: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.871093618Z" level=info msg="CreateContainer within sandbox \"d35a249623dcf4705609d48e3a1ffdb56d8625cae031e61983391e525f34d081\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"494bc3382cb83736807dcb36ea6944784af33e33f94882808930795f27388c7f\""
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.872286652Z" level=info msg="StartContainer for \"494bc3382cb83736807dcb36ea6944784af33e33f94882808930795f27388c7f\""
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.873692350Z" level=info msg="connecting to shim 494bc3382cb83736807dcb36ea6944784af33e33f94882808930795f27388c7f" address="unix:///run/containerd/s/12cff00c35647593d05548c7fa195d2bdf00b8303d1fe1bb1c09dbc3effac604" protocol=ttrpc version=3
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.950649385Z" level=info msg="StartContainer for \"494bc3382cb83736807dcb36ea6944784af33e33f94882808930795f27388c7f\" returns successfully"
	Nov 22 00:36:35 old-k8s-version-187160 containerd[761]: E1122 00:36:35.395678     761 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43688 - 23032 "HINFO IN 3717724876547105178.1036559960491691526. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021196589s
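
The start log's sed pipeline rewrites this CoreDNS deployment's Corefile so that host.minikube.internal resolves to the host gateway (192.168.85.1) and queries are logged. Reconstructed from the sed expressions themselves, the injected excerpt looks like this; the surrounding Corefile content is elided:

        log            # inserted after the existing "errors" directive
        hosts {
           192.168.85.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf   # existing line the hosts stanza is anchored before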
	
	
	==> describe nodes <==
	Name:               old-k8s-version-187160
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-187160
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=old-k8s-version-187160
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_35_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:35:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-187160
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:36:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:36:26 +0000   Sat, 22 Nov 2025 00:35:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:36:26 +0000   Sat, 22 Nov 2025 00:35:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:36:26 +0000   Sat, 22 Nov 2025 00:35:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:36:26 +0000   Sat, 22 Nov 2025 00:36:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-187160
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                a6732ccb-f376-4b40-84a6-bd1e3603acd7
	  Boot ID:                    4e86741a-5896-4eb6-97ce-70ea8beedc67
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-mrsrv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     28s
	  kube-system                 etcd-old-k8s-version-187160                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         41s
	  kube-system                 kindnet-lprzz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-187160             250m (12%)    0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 kube-controller-manager-old-k8s-version-187160    200m (10%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-proxy-bmr5t                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-187160             100m (5%)     0 (0%)      0 (0%)           0 (0%)         41s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node old-k8s-version-187160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node old-k8s-version-187160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     50s (x7 over 50s)  kubelet          Node old-k8s-version-187160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  50s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s                kubelet          Node old-k8s-version-187160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s                kubelet          Node old-k8s-version-187160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s                kubelet          Node old-k8s-version-187160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  41s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-187160 event: Registered Node old-k8s-version-187160 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-187160 status is now: NodeReady
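
The Allocated resources percentages in the node description follow directly from the capacity figures: 850m of CPU requests against a 2-CPU (2000m) node is 42%, and 220Mi of memory against 8022300Ki allocatable is about 2% (kubectl truncates rather than rounds). A quick check in Go:

package main

import "fmt"

func main() {
	// CPU: requests are in millicores; the node has 2 CPUs = 2000m.
	fmt.Printf("cpu: %d%%\n", 850*100/2000) // 42 (integer division truncates, as kubectl does)

	// Memory: 220Mi converted to Ki, against the node's 8022300Ki allocatable.
	memKi := 220 * 1024
	fmt.Printf("memory: %d%%\n", memKi*100/8022300) // 2
}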
	
	
	==> dmesg <==
	[Nov21 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.017121] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498034] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037542] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808656] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.648915] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 23:58] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=00000000f9ea0775{9P.session} n=0000000035823f74
	[  +0.001177] FS-Cache: O-key=[10] '34323935353131333738'
	[  +0.000819] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000f9ea0775{9P.session} n=00000000dbfd8515
	[  +0.001154] FS-Cache: N-key=[10] '34323935353131333738'
	[Nov22 00:00] hrtimer: interrupt took 9958927 ns
	
	
	==> etcd [4d2a9fc38adb0b31ca51bf2e68e8a59fe482fcd2b93af068c1db236c50e65e57] <==
	{"level":"info","ts":"2025-11-22T00:35:47.864157Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-22T00:35:47.869484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-22T00:35:47.873465Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-22T00:35:47.869589Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:35:47.87352Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:35:47.873722Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-22T00:35:47.873744Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-22T00:35:48.530385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-22T00:35:48.530665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-22T00:35:48.530804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-22T00:35:48.530891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-22T00:35:48.530969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-22T00:35:48.531058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-22T00:35:48.531128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-22T00:35:48.532649Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:35:48.533919Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-187160 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-22T00:35:48.534231Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:35:48.534769Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:35:48.536025Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:35:48.536154Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:35:48.536275Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:35:48.539923Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-22T00:35:48.540183Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-22T00:35:48.540261Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-22T00:35:48.557717Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 00:36:36 up  1:18,  0 user,  load average: 3.76, 3.84, 2.86
	Linux old-k8s-version-187160 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cdc46abde6bad9104481ebcd97fd8584433d6596a115bb7ac80832f48229c0d] <==
	I1122 00:36:11.316025       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:36:11.316279       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:36:11.316431       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:36:11.316442       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:36:11.316456       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:36:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:36:11.612539       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:36:11.612562       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:36:11.612570       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:36:11.613331       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:36:11.813277       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:36:11.813307       1 metrics.go:72] Registering metrics
	I1122 00:36:11.813408       1 controller.go:711] "Syncing nftables rules"
	I1122 00:36:21.620218       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:36:21.620282       1 main.go:301] handling current node
	I1122 00:36:31.613593       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:36:31.613631       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8e8018cdd5ebcb3cd027b426852cef360ea1f4e64ace348b361ff36ccd368012] <==
	I1122 00:35:52.259918       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1122 00:35:52.260106       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1122 00:35:52.260569       1 shared_informer.go:318] Caches are synced for configmaps
	I1122 00:35:52.261451       1 controller.go:624] quota admission added evaluator for: namespaces
	I1122 00:35:52.266423       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1122 00:35:52.266592       1 aggregator.go:166] initial CRD sync complete...
	I1122 00:35:52.266734       1 autoregister_controller.go:141] Starting autoregister controller
	I1122 00:35:52.266876       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:35:52.266967       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:35:52.293066       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:35:52.978061       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:35:52.986815       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:35:52.986841       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:35:53.822664       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:35:53.879991       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:35:53.983791       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:35:53.991186       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:35:53.992448       1 controller.go:624] quota admission added evaluator for: endpoints
	I1122 00:35:53.999040       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:35:54.214195       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1122 00:35:55.482369       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1122 00:35:55.497744       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:35:55.520177       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1122 00:36:08.135231       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:36:08.187027       1 controller.go:624] quota admission added evaluator for: replicasets.apps
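
The api_server.go lines in the start log ("Checking apiserver healthz at https://192.168.85.2:8443/healthz ... returned 200: ok") reduce to an HTTPS GET that treats a 200 response with body "ok" as healthy. A self-contained sketch; TLS verification is skipped here only for brevity, whereas a real client would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Probe the apiserver health endpoint the way the start log describes.
	// InsecureSkipVerify is a shortcut for this sketch only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.85.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
}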
	
	
	==> kube-controller-manager [422173de99e2aced5a73d55aeb7e56b0728bc3681c11733622c38f6d0425ecd3] <==
	I1122 00:36:08.131017       1 shared_informer.go:318] Caches are synced for job
	I1122 00:36:08.147818       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:36:08.172229       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:36:08.178101       1 shared_informer.go:318] Caches are synced for cronjob
	I1122 00:36:08.190358       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lprzz"
	I1122 00:36:08.221984       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bmr5t"
	I1122 00:36:08.232130       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1122 00:36:08.315039       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-p9rvg"
	I1122 00:36:08.340943       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mrsrv"
	I1122 00:36:08.369701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="156.223576ms"
	I1122 00:36:08.379883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.079149ms"
	I1122 00:36:08.380490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.036µs"
	I1122 00:36:08.627042       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:36:08.627075       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1122 00:36:08.627148       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:36:10.024745       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1122 00:36:10.054530       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-p9rvg"
	I1122 00:36:10.066916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.969237ms"
	I1122 00:36:10.077020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.054493ms"
	I1122 00:36:10.077099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.464µs"
	I1122 00:36:21.693144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.262µs"
	I1122 00:36:21.721559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.552µs"
	I1122 00:36:22.906222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.230605ms"
	I1122 00:36:22.906582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.003µs"
	I1122 00:36:23.031191       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
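
The ScalingReplicaSet events above and the earlier kapi.go:214 line ("coredns" deployment ... rescaled to 1 replicas) are two views of the same operation: minikube shrinks the default two-replica CoreDNS deployment to one on a single-node cluster. A sketch of the equivalent scale call through client-go's scale subresource API, reusing the kubeconfig path from the start log:

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Read the current scale, then write it back with one replica,
	// mirroring the rescale the controller-manager events record.
	deployments := cs.AppsV1().Deployments("kube-system")
	scale, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}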
	
	
	==> kube-proxy [8e33bd8eeab59d54aa5b42af6c70b616bdbf7c411e21a37367f8687511b9cbf6] <==
	I1122 00:36:09.223209       1 server_others.go:69] "Using iptables proxy"
	I1122 00:36:09.244053       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1122 00:36:09.307007       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:36:09.309152       1 server_others.go:152] "Using iptables Proxier"
	I1122 00:36:09.309196       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1122 00:36:09.309205       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1122 00:36:09.309229       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1122 00:36:09.309429       1 server.go:846] "Version info" version="v1.28.0"
	I1122 00:36:09.309444       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:36:09.310333       1 config.go:188] "Starting service config controller"
	I1122 00:36:09.310356       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1122 00:36:09.310374       1 config.go:97] "Starting endpoint slice config controller"
	I1122 00:36:09.310377       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1122 00:36:09.313015       1 config.go:315] "Starting node config controller"
	I1122 00:36:09.313030       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1122 00:36:09.411178       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1122 00:36:09.411227       1 shared_informer.go:318] Caches are synced for service config
	I1122 00:36:09.413934       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f4bd605783e2022ce18c699001040087e5badf067b4e0004a50ec4c353329100] <==
	I1122 00:35:51.229184       1 serving.go:348] Generated self-signed cert in-memory
	I1122 00:35:54.167904       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1122 00:35:54.168129       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:35:54.173851       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1122 00:35:54.173999       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1122 00:35:54.174229       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1122 00:35:54.174045       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:35:54.181808       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1122 00:35:54.174056       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:35:54.183738       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1122 00:35:54.174069       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1122 00:35:54.274685       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1122 00:35:54.284155       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1122 00:35:54.285311       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.146939    1537 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.149363    1537 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.250067    1537 topology_manager.go:215] "Topology Admit Handler" podUID="5aba37af-f297-48d8-bc0b-d368ae96d525" podNamespace="kube-system" podName="kindnet-lprzz"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.256574    1537 topology_manager.go:215] "Topology Admit Handler" podUID="dffeabf6-7d14-473d-a908-1995469b8249" podNamespace="kube-system" podName="kube-proxy-bmr5t"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318270    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tdg\" (UniqueName: \"kubernetes.io/projected/dffeabf6-7d14-473d-a908-1995469b8249-kube-api-access-r4tdg\") pod \"kube-proxy-bmr5t\" (UID: \"dffeabf6-7d14-473d-a908-1995469b8249\") " pod="kube-system/kube-proxy-bmr5t"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318333    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5aba37af-f297-48d8-bc0b-d368ae96d525-cni-cfg\") pod \"kindnet-lprzz\" (UID: \"5aba37af-f297-48d8-bc0b-d368ae96d525\") " pod="kube-system/kindnet-lprzz"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318362    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dffeabf6-7d14-473d-a908-1995469b8249-lib-modules\") pod \"kube-proxy-bmr5t\" (UID: \"dffeabf6-7d14-473d-a908-1995469b8249\") " pod="kube-system/kube-proxy-bmr5t"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318389    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5aba37af-f297-48d8-bc0b-d368ae96d525-lib-modules\") pod \"kindnet-lprzz\" (UID: \"5aba37af-f297-48d8-bc0b-d368ae96d525\") " pod="kube-system/kindnet-lprzz"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318411    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dffeabf6-7d14-473d-a908-1995469b8249-kube-proxy\") pod \"kube-proxy-bmr5t\" (UID: \"dffeabf6-7d14-473d-a908-1995469b8249\") " pod="kube-system/kube-proxy-bmr5t"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318434    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dffeabf6-7d14-473d-a908-1995469b8249-xtables-lock\") pod \"kube-proxy-bmr5t\" (UID: \"dffeabf6-7d14-473d-a908-1995469b8249\") " pod="kube-system/kube-proxy-bmr5t"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318457    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5aba37af-f297-48d8-bc0b-d368ae96d525-xtables-lock\") pod \"kindnet-lprzz\" (UID: \"5aba37af-f297-48d8-bc0b-d368ae96d525\") " pod="kube-system/kindnet-lprzz"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318481    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wksn9\" (UniqueName: \"kubernetes.io/projected/5aba37af-f297-48d8-bc0b-d368ae96d525-kube-api-access-wksn9\") pod \"kindnet-lprzz\" (UID: \"5aba37af-f297-48d8-bc0b-d368ae96d525\") " pod="kube-system/kindnet-lprzz"
	Nov 22 00:36:09 old-k8s-version-187160 kubelet[1537]: I1122 00:36:09.837710    1537 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bmr5t" podStartSLOduration=1.8375463779999999 podCreationTimestamp="2025-11-22 00:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:36:09.837193356 +0000 UTC m=+14.395591777" watchObservedRunningTime="2025-11-22 00:36:09.837546378 +0000 UTC m=+14.395944807"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.636202    1537 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.673663    1537 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-lprzz" podStartSLOduration=11.549749383 podCreationTimestamp="2025-11-22 00:36:08 +0000 UTC" firstStartedPulling="2025-11-22 00:36:08.937197424 +0000 UTC m=+13.495595845" lastFinishedPulling="2025-11-22 00:36:11.061029686 +0000 UTC m=+15.619428107" observedRunningTime="2025-11-22 00:36:11.843836409 +0000 UTC m=+16.402234830" watchObservedRunningTime="2025-11-22 00:36:21.673581645 +0000 UTC m=+26.231980074"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.674153    1537 topology_manager.go:215] "Topology Admit Handler" podUID="a3bd5eb1-a002-4b61-8bd6-5caabe4bf543" podNamespace="kube-system" podName="storage-provisioner"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.682473    1537 topology_manager.go:215] "Topology Admit Handler" podUID="98b86160-bc56-4571-a3ac-ebfd93eda042" podNamespace="kube-system" podName="coredns-5dd5756b68-mrsrv"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.718520    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7j69\" (UniqueName: \"kubernetes.io/projected/a3bd5eb1-a002-4b61-8bd6-5caabe4bf543-kube-api-access-v7j69\") pod \"storage-provisioner\" (UID: \"a3bd5eb1-a002-4b61-8bd6-5caabe4bf543\") " pod="kube-system/storage-provisioner"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.718757    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a3bd5eb1-a002-4b61-8bd6-5caabe4bf543-tmp\") pod \"storage-provisioner\" (UID: \"a3bd5eb1-a002-4b61-8bd6-5caabe4bf543\") " pod="kube-system/storage-provisioner"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.718938    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbh7\" (UniqueName: \"kubernetes.io/projected/98b86160-bc56-4571-a3ac-ebfd93eda042-kube-api-access-9rbh7\") pod \"coredns-5dd5756b68-mrsrv\" (UID: \"98b86160-bc56-4571-a3ac-ebfd93eda042\") " pod="kube-system/coredns-5dd5756b68-mrsrv"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.719128    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98b86160-bc56-4571-a3ac-ebfd93eda042-config-volume\") pod \"coredns-5dd5756b68-mrsrv\" (UID: \"98b86160-bc56-4571-a3ac-ebfd93eda042\") " pod="kube-system/coredns-5dd5756b68-mrsrv"
	Nov 22 00:36:22 old-k8s-version-187160 kubelet[1537]: I1122 00:36:22.870118    1537 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.870074673 podCreationTimestamp="2025-11-22 00:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:36:22.869831076 +0000 UTC m=+27.428229505" watchObservedRunningTime="2025-11-22 00:36:22.870074673 +0000 UTC m=+27.428473102"
	Nov 22 00:36:25 old-k8s-version-187160 kubelet[1537]: I1122 00:36:25.241313    1537 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mrsrv" podStartSLOduration=17.241260829 podCreationTimestamp="2025-11-22 00:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:36:22.890080452 +0000 UTC m=+27.448478873" watchObservedRunningTime="2025-11-22 00:36:25.241260829 +0000 UTC m=+29.799659258"
	Nov 22 00:36:25 old-k8s-version-187160 kubelet[1537]: I1122 00:36:25.242395    1537 topology_manager.go:215] "Topology Admit Handler" podUID="f6539f6c-3a59-4e72-b903-a218596cb332" podNamespace="default" podName="busybox"
	Nov 22 00:36:25 old-k8s-version-187160 kubelet[1537]: I1122 00:36:25.347504    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8pc\" (UniqueName: \"kubernetes.io/projected/f6539f6c-3a59-4e72-b903-a218596cb332-kube-api-access-mb8pc\") pod \"busybox\" (UID: \"f6539f6c-3a59-4e72-b903-a218596cb332\") " pod="default/busybox"
	
	
	==> storage-provisioner [807c22f67261185cb1c38e8c47426e487d0218ab5042e337b6019698fe15e361] <==
	I1122 00:36:22.277527       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:36:22.313230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:36:22.313281       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1122 00:36:22.344440       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:36:22.345711       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-187160_cf733f51-b72c-4ec1-9a6a-692835c1d302!
	I1122 00:36:22.353067       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59bf7e8d-bfc7-4d7a-ba80-974e16cfaea6", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-187160_cf733f51-b72c-4ec1-9a6a-692835c1d302 became leader
	I1122 00:36:22.445864       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-187160_cf733f51-b72c-4ec1-9a6a-692835c1d302!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-187160 -n old-k8s-version-187160
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-187160 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-187160
helpers_test.go:243: (dbg) docker inspect old-k8s-version-187160:

-- stdout --
	[
	    {
	        "Id": "2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2",
	        "Created": "2025-11-22T00:35:31.769486385Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 204882,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:35:31.840763557Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2/hostname",
	        "HostsPath": "/var/lib/docker/containers/2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2/hosts",
	        "LogPath": "/var/lib/docker/containers/2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2/2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2-json.log",
	        "Name": "/old-k8s-version-187160",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-187160:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-187160",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2654618e6a6b4b33805be82437774286ea357c8daae2dd2810147786e98cfff2",
	                "LowerDir": "/var/lib/docker/overlay2/712cfcf4bdda99f3fb5a971120be94c4495350b452b5b4de871b8157be916fde-init/diff:/var/lib/docker/overlay2/7cce95e9587a813ce5f3ee5f28c6de3b78ed608010774b6d981aecaad739a571/diff",
	                "MergedDir": "/var/lib/docker/overlay2/712cfcf4bdda99f3fb5a971120be94c4495350b452b5b4de871b8157be916fde/merged",
	                "UpperDir": "/var/lib/docker/overlay2/712cfcf4bdda99f3fb5a971120be94c4495350b452b5b4de871b8157be916fde/diff",
	                "WorkDir": "/var/lib/docker/overlay2/712cfcf4bdda99f3fb5a971120be94c4495350b452b5b4de871b8157be916fde/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-187160",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-187160/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-187160",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-187160",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-187160",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "454be7a461b1abf0d012dc3454c28ec8e28206a70d925dc40668a3129b452d06",
	            "SandboxKey": "/var/run/docker/netns/454be7a461b1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-187160": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "6a:59:fc:b0:2c:bb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e57260d803e783f0c78a581231fefae2ba2ea5340f147424bbbb6f769732791",
	                    "EndpointID": "fbf38d70257f16dbf88c51dde15f08dd0cc2ad864def7c05de53ec66a3bd02e0",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-187160",
	                        "2654618e6a6b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-187160 -n old-k8s-version-187160
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-187160 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-187160 logs -n 25: (1.240303579s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-482944 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                      │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                                │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo cri-dockerd --version                                                                                                                                                                                                         │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                           │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo systemctl cat containerd --no-pager                                                                                                                                                                                           │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                    │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo cat /etc/containerd/config.toml                                                                                                                                                                                               │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo containerd config dump                                                                                                                                                                                                        │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                                 │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo systemctl cat crio --no-pager                                                                                                                                                                                                 │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                       │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ ssh     │ -p cilium-482944 sudo crio config                                                                                                                                                                                                                   │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ delete  │ -p cilium-482944                                                                                                                                                                                                                                    │ cilium-482944             │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p force-systemd-env-115975 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-115975  │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-381698 │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ start   │ -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-381698 │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p kubernetes-upgrade-381698                                                                                                                                                                                                                        │ kubernetes-upgrade-381698 │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p cert-expiration-285797 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-285797    │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ force-systemd-env-115975 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-115975  │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p force-systemd-env-115975                                                                                                                                                                                                                         │ force-systemd-env-115975  │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p cert-options-089440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-089440       │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ cert-options-089440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-089440       │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ -p cert-options-089440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-089440       │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ delete  │ -p cert-options-089440                                                                                                                                                                                                                              │ cert-options-089440       │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ start   │ -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-187160    │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:36 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:35:25
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:35:25.495756  204491 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:35:25.496048  204491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:35:25.496081  204491 out.go:374] Setting ErrFile to fd 2...
	I1122 00:35:25.496105  204491 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:35:25.496419  204491 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:35:25.496952  204491 out.go:368] Setting JSON to false
	I1122 00:35:25.497971  204491 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4663,"bootTime":1763767063,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1122 00:35:25.498081  204491 start.go:143] virtualization:  
	I1122 00:35:25.501708  204491 out.go:179] * [old-k8s-version-187160] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:35:25.506412  204491 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:35:25.506475  204491 notify.go:221] Checking for updates...
	I1122 00:35:25.509912  204491 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:35:25.513456  204491 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:35:25.516592  204491 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1122 00:35:25.519636  204491 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:35:25.523211  204491 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:35:25.526641  204491 config.go:182] Loaded profile config "cert-expiration-285797": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:35:25.526787  204491 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:35:25.555005  204491 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:35:25.555124  204491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:35:25.622538  204491 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:35:25.61288294 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:35:25.622642  204491 docker.go:319] overlay module found
	I1122 00:35:25.626050  204491 out.go:179] * Using the docker driver based on user configuration
	I1122 00:35:25.629025  204491 start.go:309] selected driver: docker
	I1122 00:35:25.629048  204491 start.go:930] validating driver "docker" against <nil>
	I1122 00:35:25.629062  204491 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:35:25.629920  204491 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:35:25.686113  204491 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:35:25.676702728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:35:25.686272  204491 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:35:25.686516  204491 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:35:25.689987  204491 out.go:179] * Using Docker driver with root privileges
	I1122 00:35:25.693031  204491 cni.go:84] Creating CNI manager for ""
	I1122 00:35:25.693097  204491 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:35:25.693112  204491 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:35:25.693191  204491 start.go:353] cluster config:
	{Name:old-k8s-version-187160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-187160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:35:25.697920  204491 out.go:179] * Starting "old-k8s-version-187160" primary control-plane node in "old-k8s-version-187160" cluster
	I1122 00:35:25.700757  204491 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:35:25.703653  204491 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:35:25.706453  204491 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1122 00:35:25.706497  204491 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1122 00:35:25.706516  204491 cache.go:65] Caching tarball of preloaded images
	I1122 00:35:25.706539  204491 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:35:25.706598  204491 preload.go:238] Found /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1122 00:35:25.706609  204491 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1122 00:35:25.706742  204491 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/config.json ...
	I1122 00:35:25.706761  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/config.json: {Name:mk0e32effc08fb3e92cb6a10a0036ab11d9ac603 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:25.725937  204491 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:35:25.725959  204491 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:35:25.725973  204491 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:35:25.725996  204491 start.go:360] acquireMachinesLock for old-k8s-version-187160: {Name:mk9a2f2e89734c88923a7d9b9f969c1a6370f913 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:35:25.726113  204491 start.go:364] duration metric: took 96.034µs to acquireMachinesLock for "old-k8s-version-187160"
	I1122 00:35:25.726144  204491 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-187160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-187160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:35:25.726218  204491 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:35:25.729632  204491 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:35:25.729884  204491 start.go:159] libmachine.API.Create for "old-k8s-version-187160" (driver="docker")
	I1122 00:35:25.729925  204491 client.go:173] LocalClient.Create starting
	I1122 00:35:25.729999  204491 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem
	I1122 00:35:25.730048  204491 main.go:143] libmachine: Decoding PEM data...
	I1122 00:35:25.730067  204491 main.go:143] libmachine: Parsing certificate...
	I1122 00:35:25.730126  204491 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem
	I1122 00:35:25.730149  204491 main.go:143] libmachine: Decoding PEM data...
	I1122 00:35:25.730161  204491 main.go:143] libmachine: Parsing certificate...
	I1122 00:35:25.730541  204491 cli_runner.go:164] Run: docker network inspect old-k8s-version-187160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:35:25.747388  204491 cli_runner.go:211] docker network inspect old-k8s-version-187160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:35:25.747476  204491 network_create.go:284] running [docker network inspect old-k8s-version-187160] to gather additional debugging logs...
	I1122 00:35:25.747497  204491 cli_runner.go:164] Run: docker network inspect old-k8s-version-187160
	W1122 00:35:25.763840  204491 cli_runner.go:211] docker network inspect old-k8s-version-187160 returned with exit code 1
	I1122 00:35:25.763887  204491 network_create.go:287] error running [docker network inspect old-k8s-version-187160]: docker network inspect old-k8s-version-187160: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network old-k8s-version-187160 not found
	I1122 00:35:25.763900  204491 network_create.go:289] output of [docker network inspect old-k8s-version-187160]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network old-k8s-version-187160 not found
	
	** /stderr **
	I1122 00:35:25.764015  204491 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:35:25.780864  204491 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc891483483f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:f5:f5:5e:a2:12} reservation:<nil>}
	I1122 00:35:25.781241  204491 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dcada94e63da IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:bf:ad:c8:04:5e} reservation:<nil>}
	I1122 00:35:25.781562  204491 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7ab25f17f29c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:32:b1:2f:5f:ec} reservation:<nil>}
	I1122 00:35:25.781803  204491 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-6cb9cfba5857 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:ca:53:4c:fc:1c:97} reservation:<nil>}
	I1122 00:35:25.782260  204491 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40019c3a10}
	I1122 00:35:25.782291  204491 network_create.go:124] attempt to create docker network old-k8s-version-187160 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1122 00:35:25.782360  204491 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=old-k8s-version-187160 old-k8s-version-187160
	I1122 00:35:25.842320  204491 network_create.go:108] docker network old-k8s-version-187160 192.168.85.0/24 created
	I1122 00:35:25.842355  204491 kic.go:121] calculated static IP "192.168.85.2" for the "old-k8s-version-187160" container
	I1122 00:35:25.842428  204491 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:35:25.861485  204491 cli_runner.go:164] Run: docker volume create old-k8s-version-187160 --label name.minikube.sigs.k8s.io=old-k8s-version-187160 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:35:25.879635  204491 oci.go:103] Successfully created a docker volume old-k8s-version-187160
	I1122 00:35:25.879743  204491 cli_runner.go:164] Run: docker run --rm --name old-k8s-version-187160-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-187160 --entrypoint /usr/bin/test -v old-k8s-version-187160:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:35:26.435658  204491 oci.go:107] Successfully prepared a docker volume old-k8s-version-187160
	I1122 00:35:26.435728  204491 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1122 00:35:26.435747  204491 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:35:26.435823  204491 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-187160:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:35:31.696637  204491 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-187160:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (5.260759946s)
	I1122 00:35:31.696672  204491 kic.go:203] duration metric: took 5.260921646s to extract preloaded images to volume ...
	W1122 00:35:31.696817  204491 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:35:31.696931  204491 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:35:31.753972  204491 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-187160 --name old-k8s-version-187160 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-187160 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-187160 --network old-k8s-version-187160 --ip 192.168.85.2 --volume old-k8s-version-187160:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:35:32.113795  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Running}}
	I1122 00:35:32.148430  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:35:32.171721  204491 cli_runner.go:164] Run: docker exec old-k8s-version-187160 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:35:32.239681  204491 oci.go:144] the created container "old-k8s-version-187160" has a running status.
	I1122 00:35:32.239723  204491 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa...
	I1122 00:35:32.387029  204491 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:35:32.412458  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:35:32.436918  204491 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:35:32.436940  204491 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-187160 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:35:32.514133  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:35:32.533241  204491 machine.go:94] provisionDockerMachine start ...
	I1122 00:35:32.533325  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:32.550125  204491 main.go:143] libmachine: Using SSH client type: native
	I1122 00:35:32.550468  204491 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1122 00:35:32.550477  204491 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:35:32.551138  204491 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:35:35.695199  204491 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-187160
	
	I1122 00:35:35.695223  204491 ubuntu.go:182] provisioning hostname "old-k8s-version-187160"
	I1122 00:35:35.695289  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:35.713198  204491 main.go:143] libmachine: Using SSH client type: native
	I1122 00:35:35.713512  204491 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1122 00:35:35.713524  204491 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-187160 && echo "old-k8s-version-187160" | sudo tee /etc/hostname
	I1122 00:35:35.869138  204491 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-187160
	
	I1122 00:35:35.869219  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:35.895158  204491 main.go:143] libmachine: Using SSH client type: native
	I1122 00:35:35.895475  204491 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33053 <nil> <nil>}
	I1122 00:35:35.895503  204491 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-187160' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-187160/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-187160' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:35:36.040218  204491 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:35:36.040314  204491 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-2332/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-2332/.minikube}
	I1122 00:35:36.040363  204491 ubuntu.go:190] setting up certificates
	I1122 00:35:36.040400  204491 provision.go:84] configureAuth start
	I1122 00:35:36.040487  204491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187160
	I1122 00:35:36.058556  204491 provision.go:143] copyHostCerts
	I1122 00:35:36.058636  204491 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem, removing ...
	I1122 00:35:36.058646  204491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem
	I1122 00:35:36.058727  204491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem (1078 bytes)
	I1122 00:35:36.058834  204491 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem, removing ...
	I1122 00:35:36.058840  204491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem
	I1122 00:35:36.058868  204491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem (1123 bytes)
	I1122 00:35:36.058922  204491 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem, removing ...
	I1122 00:35:36.058927  204491 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem
	I1122 00:35:36.058953  204491 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem (1675 bytes)
	I1122 00:35:36.059005  204491 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-187160 san=[127.0.0.1 192.168.85.2 localhost minikube old-k8s-version-187160]
	I1122 00:35:36.348541  204491 provision.go:177] copyRemoteCerts
	I1122 00:35:36.348614  204491 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:35:36.348667  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:36.369601  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:35:36.471378  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1122 00:35:36.490458  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:35:36.508270  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
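minikube generates the server certificate above in Go, but purely as an illustration, an equivalent OpenSSL flow for a cert carrying the SANs logged at 00:35:36.059005 would be roughly (all file names here are placeholders):

	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -subj "/O=jenkins.old-k8s-version-187160" -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -days 365 -out server.pem \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.85.2,DNS:localhost,DNS:minikube,DNS:old-k8s-version-187160')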
	I1122 00:35:36.526938  204491 provision.go:87] duration metric: took 486.502541ms to configureAuth
	I1122 00:35:36.527021  204491 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:35:36.527237  204491 config.go:182] Loaded profile config "old-k8s-version-187160": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1122 00:35:36.527255  204491 machine.go:97] duration metric: took 3.993997448s to provisionDockerMachine
	I1122 00:35:36.527264  204491 client.go:176] duration metric: took 10.797328966s to LocalClient.Create
	I1122 00:35:36.527293  204491 start.go:167] duration metric: took 10.797411199s to libmachine.API.Create "old-k8s-version-187160"
	I1122 00:35:36.527306  204491 start.go:293] postStartSetup for "old-k8s-version-187160" (driver="docker")
	I1122 00:35:36.527316  204491 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:35:36.527381  204491 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:35:36.527430  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:36.545627  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:35:36.647923  204491 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:35:36.651471  204491 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:35:36.651502  204491 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:35:36.651514  204491 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/addons for local assets ...
	I1122 00:35:36.651600  204491 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/files for local assets ...
	I1122 00:35:36.651680  204491 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem -> 56232.pem in /etc/ssl/certs
	I1122 00:35:36.651791  204491 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:35:36.659452  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:35:36.677628  204491 start.go:296] duration metric: took 150.307453ms for postStartSetup
	I1122 00:35:36.678009  204491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187160
	I1122 00:35:36.695200  204491 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/config.json ...
	I1122 00:35:36.695488  204491 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:35:36.695533  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:36.712347  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:35:36.812625  204491 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:35:36.817876  204491 start.go:128] duration metric: took 11.091643714s to createHost
	I1122 00:35:36.817903  204491 start.go:83] releasing machines lock for "old-k8s-version-187160", held for 11.091773776s
	I1122 00:35:36.817972  204491 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-187160
	I1122 00:35:36.834925  204491 ssh_runner.go:195] Run: cat /version.json
	I1122 00:35:36.834937  204491 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:35:36.834978  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:36.834993  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:35:36.857313  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:35:36.858772  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:35:37.058580  204491 ssh_runner.go:195] Run: systemctl --version
	I1122 00:35:37.065466  204491 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:35:37.070084  204491 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:35:37.070153  204491 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:35:37.102719  204491 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
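The find invocation above is logged with its shell quoting stripped; a copy-pasteable form of the same disable step (using the safer "$1" idiom instead of embedding {} in the sh -c string) is:

	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;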
	I1122 00:35:37.102749  204491 start.go:496] detecting cgroup driver to use...
	I1122 00:35:37.102783  204491 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:35:37.102844  204491 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:35:37.120639  204491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:35:37.134362  204491 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:35:37.134421  204491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:35:37.154417  204491 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:35:37.175282  204491 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:35:37.300828  204491 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:35:37.432936  204491 docker.go:234] disabling docker service ...
	I1122 00:35:37.433033  204491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:35:37.460467  204491 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:35:37.474617  204491 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:35:37.589678  204491 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:35:37.706875  204491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:35:37.721264  204491 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:35:37.736772  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I1122 00:35:37.745143  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:35:37.754554  204491 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1122 00:35:37.754677  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1122 00:35:37.764278  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:35:37.773387  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:35:37.782068  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:35:37.791172  204491 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:35:37.799462  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:35:37.809187  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:35:37.819591  204491 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:35:37.829972  204491 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:35:37.837818  204491 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:35:37.845123  204491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:35:37.972336  204491 ssh_runner.go:195] Run: sudo systemctl restart containerd
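Taken together, the sed edits above (sandbox image, oom-score restriction, cgroup driver, runc v2 runtime, CNI conf_dir, unprivileged ports) leave /etc/containerd/config.toml with roughly this CRI fragment; this is a sketch assuming the stock kicbase config layout, with unrelated sections omitted:

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.9"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	    SystemdCgroup = false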
	I1122 00:35:38.106484  204491 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:35:38.106607  204491 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:35:38.110682  204491 start.go:564] Will wait 60s for crictl version
	I1122 00:35:38.110811  204491 ssh_runner.go:195] Run: which crictl
	I1122 00:35:38.114578  204491 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:35:38.143349  204491 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:35:38.143510  204491 ssh_runner.go:195] Run: containerd --version
	I1122 00:35:38.169250  204491 ssh_runner.go:195] Run: containerd --version
	I1122 00:35:38.195498  204491 out.go:179] * Preparing Kubernetes v1.28.0 on containerd 2.1.5 ...
	I1122 00:35:38.198689  204491 cli_runner.go:164] Run: docker network inspect old-k8s-version-187160 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:35:38.215899  204491 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:35:38.220352  204491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
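Both hosts-file updates in this run (host.minikube.internal here, control-plane.minikube.internal at 00:35:38.380569) use the same idempotent rewrite: drop any existing line for the name, append the fresh mapping, then copy the temp file back. Generalized:

	NAME=host.minikube.internal IP=192.168.85.1   # values from this run
	{ grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > "/tmp/h.$$"
	sudo cp "/tmp/h.$$" /etc/hosts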
	I1122 00:35:38.230933  204491 kubeadm.go:884] updating cluster {Name:old-k8s-version-187160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-187160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:35:38.231054  204491 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1122 00:35:38.231125  204491 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:35:38.256948  204491 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:35:38.256971  204491 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:35:38.257034  204491 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:35:38.286540  204491 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:35:38.286568  204491 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:35:38.286577  204491 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 containerd true true} ...
	I1122 00:35:38.286682  204491 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-187160 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-187160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:35:38.286756  204491 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:35:38.313643  204491 cni.go:84] Creating CNI manager for ""
	I1122 00:35:38.313668  204491 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:35:38.313689  204491 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:35:38.313711  204491 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-187160 NodeName:old-k8s-version-187160 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:35:38.313845  204491 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "old-k8s-version-187160"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
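Before this file is handed to kubeadm init (00:35:40.183634 below), it can be sanity-checked offline; to the best of my knowledge kubeadm has shipped a validate subcommand since v1.26, so the v1.28.0 binary used here should accept it (illustrative invocation, not taken from this log):

	sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml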
	
	I1122 00:35:38.313918  204491 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1122 00:35:38.322064  204491 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:35:38.322135  204491 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:35:38.330790  204491 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (326 bytes)
	I1122 00:35:38.348355  204491 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:35:38.363167  204491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2176 bytes)
	I1122 00:35:38.376556  204491 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:35:38.380569  204491 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:35:38.391056  204491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:35:38.526025  204491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:35:38.545711  204491 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160 for IP: 192.168.85.2
	I1122 00:35:38.545734  204491 certs.go:195] generating shared ca certs ...
	I1122 00:35:38.545751  204491 certs.go:227] acquiring lock for ca certs: {Name:mk348a892ec4309987f6c81ee1acef4884ca62db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:38.545938  204491 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key
	I1122 00:35:38.545988  204491 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key
	I1122 00:35:38.546003  204491 certs.go:257] generating profile certs ...
	I1122 00:35:38.546061  204491 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.key
	I1122 00:35:38.546079  204491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt with IP's: []
	I1122 00:35:38.803940  204491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt ...
	I1122 00:35:38.803973  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: {Name:mk95462d156959a6f9b819420692e4652b18d9e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:38.804179  204491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.key ...
	I1122 00:35:38.804193  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.key: {Name:mk162292d187fc773689134dda95a4cf7124ec7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:38.804293  204491 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key.1112e05c
	I1122 00:35:38.804315  204491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt.1112e05c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1122 00:35:39.153162  204491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt.1112e05c ...
	I1122 00:35:39.153195  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt.1112e05c: {Name:mka276df0b08ad9c5f26731f4b9f3e54b782777f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:39.153401  204491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key.1112e05c ...
	I1122 00:35:39.153416  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key.1112e05c: {Name:mk9054d49d862aec16e8a3fc2afe3d910c07fd2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:39.153506  204491 certs.go:382] copying /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt.1112e05c -> /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt
	I1122 00:35:39.153592  204491 certs.go:386] copying /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key.1112e05c -> /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key
	I1122 00:35:39.153655  204491 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.key
	I1122 00:35:39.153677  204491 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.crt with IP's: []
	I1122 00:35:39.563837  204491 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.crt ...
	I1122 00:35:39.563873  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.crt: {Name:mk9c0df39f4f112052b7beaf5fa971f1bf609226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:39.564069  204491 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.key ...
	I1122 00:35:39.564088  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.key: {Name:mk7e4d396efea8259e6e7217a8413e2f5c662eb5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:35:39.564286  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem (1338 bytes)
	W1122 00:35:39.564334  204491 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623_empty.pem, impossibly tiny 0 bytes
	I1122 00:35:39.564349  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:35:39.564376  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:35:39.564404  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:35:39.564428  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem (1675 bytes)
	I1122 00:35:39.564481  204491 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:35:39.565109  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:35:39.584667  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:35:39.603030  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:35:39.630817  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:35:39.649557  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1122 00:35:39.668869  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:35:39.688997  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:35:39.706802  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:35:39.724608  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem --> /usr/share/ca-certificates/5623.pem (1338 bytes)
	I1122 00:35:39.743500  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /usr/share/ca-certificates/56232.pem (1708 bytes)
	I1122 00:35:39.762127  204491 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:35:39.780891  204491 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:35:39.794217  204491 ssh_runner.go:195] Run: openssl version
	I1122 00:35:39.800660  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56232.pem && ln -fs /usr/share/ca-certificates/56232.pem /etc/ssl/certs/56232.pem"
	I1122 00:35:39.810417  204491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56232.pem
	I1122 00:35:39.814201  204491 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/56232.pem
	I1122 00:35:39.814271  204491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56232.pem
	I1122 00:35:39.857681  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56232.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:35:39.868835  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:35:39.877140  204491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:35:39.881319  204491 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:35:39.881438  204491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:35:39.930136  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:35:39.938568  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5623.pem && ln -fs /usr/share/ca-certificates/5623.pem /etc/ssl/certs/5623.pem"
	I1122 00:35:39.948496  204491 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5623.pem
	I1122 00:35:39.952490  204491 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/5623.pem
	I1122 00:35:39.952578  204491 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5623.pem
	I1122 00:35:39.994578  204491 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5623.pem /etc/ssl/certs/51391683.0"
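The 3ec20f2e.0, b5213941.0 and 51391683.0 link names above come from OpenSSL's subject-hash scheme, which is also how the steps above derive them: hash the cert, then symlink <hash>.0 to it. Generic form of the step:

	pem=/etc/ssl/certs/minikubeCA.pem               # any of the three certs above
	h=$(openssl x509 -hash -noout -in "$pem")
	sudo ln -fs "$pem" "/etc/ssl/certs/${h}.0"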
	I1122 00:35:40.027794  204491 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:35:40.032651  204491 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:35:40.032738  204491 kubeadm.go:401] StartCluster: {Name:old-k8s-version-187160 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-187160 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:35:40.032812  204491 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:35:40.032877  204491 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:35:40.076761  204491 cri.go:89] found id: ""
	I1122 00:35:40.076845  204491 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:35:40.090718  204491 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:35:40.100082  204491 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:35:40.100159  204491 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:35:40.112504  204491 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:35:40.112529  204491 kubeadm.go:158] found existing configuration files:
	
	I1122 00:35:40.112593  204491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:35:40.124171  204491 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:35:40.124252  204491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:35:40.132777  204491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:35:40.143132  204491 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:35:40.143196  204491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:35:40.151096  204491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:35:40.159251  204491 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:35:40.159333  204491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:35:40.167381  204491 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:35:40.175820  204491 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:35:40.175919  204491 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:35:40.183634  204491 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:35:40.228995  204491 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1122 00:35:40.229354  204491 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:35:40.270119  204491 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:35:40.270223  204491 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:35:40.270286  204491 kubeadm.go:319] OS: Linux
	I1122 00:35:40.270358  204491 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:35:40.270431  204491 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:35:40.270506  204491 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:35:40.270583  204491 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:35:40.270653  204491 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:35:40.270725  204491 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:35:40.270800  204491 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:35:40.270872  204491 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:35:40.270941  204491 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:35:40.363742  204491 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:35:40.363899  204491 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:35:40.364030  204491 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1122 00:35:40.513095  204491 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:35:40.516289  204491 out.go:252]   - Generating certificates and keys ...
	I1122 00:35:40.516432  204491 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:35:40.516528  204491 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1122 00:35:40.711317  204491 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:35:40.978650  204491 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:35:41.429330  204491 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:35:41.658530  204491 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:35:41.984642  204491 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:35:41.985033  204491 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-187160] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:35:42.248353  204491 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:35:42.249067  204491 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-187160] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1122 00:35:42.615803  204491 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:35:43.335579  204491 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1122 00:35:43.815188  204491 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:35:43.815282  204491 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:35:44.402761  204491 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:35:44.753906  204491 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:35:45.167383  204491 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:35:45.452761  204491 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:35:45.453364  204491 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:35:45.456015  204491 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:35:45.459671  204491 out.go:252]   - Booting up control plane ...
	I1122 00:35:45.459771  204491 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:35:45.459848  204491 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:35:45.459914  204491 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:35:45.476046  204491 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:35:45.477240  204491 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:35:45.477361  204491 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:35:45.612547  204491 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1122 00:35:54.117999  204491 kubeadm.go:319] [apiclient] All control plane components are healthy after 8.505578 seconds
	I1122 00:35:54.118473  204491 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:35:54.139082  204491 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:35:54.670397  204491 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:35:54.670862  204491 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-187160 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:35:55.185028  204491 kubeadm.go:319] [bootstrap-token] Using token: to2lwb.i9cb3jhv4v448q3k
	I1122 00:35:55.188081  204491 out.go:252]   - Configuring RBAC rules ...
	I1122 00:35:55.188220  204491 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:35:55.195758  204491 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:35:55.205333  204491 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:35:55.209737  204491 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:35:55.214290  204491 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:35:55.218575  204491 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:35:55.236174  204491 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:35:55.499266  204491 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:35:55.616225  204491 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:35:55.617959  204491 kubeadm.go:319] 
	I1122 00:35:55.618033  204491 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:35:55.618040  204491 kubeadm.go:319] 
	I1122 00:35:55.618117  204491 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:35:55.618121  204491 kubeadm.go:319] 
	I1122 00:35:55.618145  204491 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:35:55.618693  204491 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:35:55.618750  204491 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:35:55.618755  204491 kubeadm.go:319] 
	I1122 00:35:55.618823  204491 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:35:55.618828  204491 kubeadm.go:319] 
	I1122 00:35:55.618876  204491 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:35:55.618880  204491 kubeadm.go:319] 
	I1122 00:35:55.618932  204491 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:35:55.619007  204491 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:35:55.619075  204491 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:35:55.619079  204491 kubeadm.go:319] 
	I1122 00:35:55.619393  204491 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:35:55.619477  204491 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:35:55.619482  204491 kubeadm.go:319] 
	I1122 00:35:55.619814  204491 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token to2lwb.i9cb3jhv4v448q3k \
	I1122 00:35:55.619924  204491 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6ad26553e08ef3801627a7166e0bb20bf24427585c6187a46d63e60c79d4d84c \
	I1122 00:35:55.620145  204491 kubeadm.go:319] 	--control-plane 
	I1122 00:35:55.620154  204491 kubeadm.go:319] 
	I1122 00:35:55.620453  204491 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:35:55.620509  204491 kubeadm.go:319] 
	I1122 00:35:55.620799  204491 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token to2lwb.i9cb3jhv4v448q3k \
	I1122 00:35:55.621154  204491 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6ad26553e08ef3801627a7166e0bb20bf24427585c6187a46d63e60c79d4d84c 
	I1122 00:35:55.628501  204491 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1122 00:35:55.628624  204491 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
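The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA at any time with the standard openssl pipeline; the path below is the certificatesDir from the kubeadm config in this run:

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'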
	I1122 00:35:55.628655  204491 cni.go:84] Creating CNI manager for ""
	I1122 00:35:55.628665  204491 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:35:55.632117  204491 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:35:55.635060  204491 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:35:55.639896  204491 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1122 00:35:55.639971  204491 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:35:55.669508  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:35:56.680357  204491 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.010766434s)
	I1122 00:35:56.680400  204491 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:35:56.680512  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:56.680587  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-187160 minikube.k8s.io/updated_at=2025_11_22T00_35_56_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=old-k8s-version-187160 minikube.k8s.io/primary=true
	I1122 00:35:56.912253  204491 ops.go:34] apiserver oom_adj: -16
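
An oom_adj of -16 tells the kernel's OOM killer to strongly deprioritize kube-apiserver under memory pressure (the legacy /proc/<pid>/oom_adj scale runs from -17, never kill, to +15). The probe is the same one-liner the runner executed above:

    # read the OOM-killer adjustment of the running apiserver
    cat /proc/$(pgrep kube-apiserver)/oom_adj
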
	I1122 00:35:56.912369  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:57.412908  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:57.912481  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:58.412668  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:58.912654  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:59.413213  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:35:59.912565  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:00.412648  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:00.913361  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:01.412704  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:01.912815  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:02.412478  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:02.913389  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:03.413307  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:03.912649  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:04.413090  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:04.912826  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:05.413328  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:05.913130  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:06.412960  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:06.912510  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:07.412753  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:07.912460  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:08.412631  204491 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:36:08.527706  204491 kubeadm.go:1114] duration metric: took 11.847239896s to wait for elevateKubeSystemPrivileges
	I1122 00:36:08.527738  204491 kubeadm.go:403] duration metric: took 28.495021982s to StartCluster
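
The burst of `kubectl get sa default` calls above (roughly every 500ms for 11.8s) is minikube waiting for kube-controller-manager to create the "default" ServiceAccount, without which pods in the namespace cannot mount a service-account token. A standalone sketch of that poll, assuming kubectl is on PATH and pointed at the cluster:

    # block until the default ServiceAccount exists
    until kubectl get serviceaccount default >/dev/null 2>&1; do
        sleep 0.5
    done
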
	I1122 00:36:08.527756  204491 settings.go:142] acquiring lock: {Name:mk5b79634916fd13f05f4c848ff3e8b07cafa39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:36:08.527819  204491 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:36:08.528755  204491 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:36:08.528975  204491 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:36:08.529135  204491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:36:08.529411  204491 config.go:182] Loaded profile config "old-k8s-version-187160": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.28.0
	I1122 00:36:08.529451  204491 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:36:08.529508  204491 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-187160"
	I1122 00:36:08.529521  204491 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-187160"
	I1122 00:36:08.529542  204491 host.go:66] Checking if "old-k8s-version-187160" exists ...
	I1122 00:36:08.530290  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:36:08.530298  204491 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-187160"
	I1122 00:36:08.530316  204491 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-187160"
	I1122 00:36:08.530614  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:36:08.532233  204491 out.go:179] * Verifying Kubernetes components...
	I1122 00:36:08.535145  204491 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:36:08.586966  204491 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:36:08.587190  204491 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-187160"
	I1122 00:36:08.587223  204491 host.go:66] Checking if "old-k8s-version-187160" exists ...
	I1122 00:36:08.587683  204491 cli_runner.go:164] Run: docker container inspect old-k8s-version-187160 --format={{.State.Status}}
	I1122 00:36:08.591301  204491 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:36:08.591402  204491 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:36:08.591477  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:36:08.635204  204491 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:36:08.635226  204491 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:36:08.635293  204491 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-187160
	I1122 00:36:08.651636  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:36:08.678324  204491 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33053 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/old-k8s-version-187160/id_rsa Username:docker}
	I1122 00:36:08.944863  204491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:36:08.950679  204491 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:36:08.950805  204491 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:36:08.978464  204491 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:36:09.986874  204491 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.036045446s)
	I1122 00:36:09.987832  204491 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-187160" to be "Ready" ...
	I1122 00:36:09.988180  204491 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.037475899s)
	I1122 00:36:09.988235  204491 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
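
The sed pipeline completed above rewrites the CoreDNS Corefile in place; reconstructed from its two expressions, it inserts a hosts stanza ahead of the `forward . /etc/resolv.conf` line (and a `log` directive ahead of `errors`):

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }

This is what lets pods resolve host.minikube.internal to the Docker network gateway; the result can be inspected with `kubectl -n kube-system get configmap coredns -o yaml`.
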
	I1122 00:36:10.240303  204491 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.261804778s)
	I1122 00:36:10.243645  204491 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1122 00:36:10.246631  204491 addons.go:530] duration metric: took 1.717174385s for enable addons: enabled=[default-storageclass storage-provisioner]
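
Both addons are shipped as plain manifests under /etc/kubernetes/addons/ and applied with the cluster's own kubectl. To see what they created, assuming access to the profile's kubeconfig:

    # storage-provisioner runs as a pod in kube-system; default-storageclass
    # marks a StorageClass (named "standard" in minikube) as the default
    kubectl -n kube-system get pod storage-provisioner
    kubectl get storageclass
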
	I1122 00:36:10.493160  204491 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-187160" context rescaled to 1 replicas
	W1122 00:36:11.991518  204491 node_ready.go:57] node "old-k8s-version-187160" has "Ready":"False" status (will retry)
	W1122 00:36:14.491462  204491 node_ready.go:57] node "old-k8s-version-187160" has "Ready":"False" status (will retry)
	W1122 00:36:16.991034  204491 node_ready.go:57] node "old-k8s-version-187160" has "Ready":"False" status (will retry)
	W1122 00:36:18.991616  204491 node_ready.go:57] node "old-k8s-version-187160" has "Ready":"False" status (will retry)
	W1122 00:36:21.491517  204491 node_ready.go:57] node "old-k8s-version-187160" has "Ready":"False" status (will retry)
	I1122 00:36:21.991333  204491 node_ready.go:49] node "old-k8s-version-187160" is "Ready"
	I1122 00:36:21.991365  204491 node_ready.go:38] duration metric: took 12.003472888s for node "old-k8s-version-187160" to be "Ready" ...
	I1122 00:36:21.991381  204491 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:36:21.991444  204491 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:36:22.006058  204491 api_server.go:72] duration metric: took 13.477048085s to wait for apiserver process to appear ...
	I1122 00:36:22.006087  204491 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:36:22.006108  204491 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:36:22.016962  204491 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:36:22.018742  204491 api_server.go:141] control plane version: v1.28.0
	I1122 00:36:22.018771  204491 api_server.go:131] duration metric: took 12.676666ms to wait for apiserver health ...
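
The healthz probe is a plain HTTPS GET against the advertised endpoint; outside minikube's runner it can be reproduced through kubectl or, assuming network reach to 192.168.85.2, with curl (TLS verification skipped because the serving cert is signed by the cluster's own CA):

    # equivalent manual health checks
    kubectl get --raw='/healthz'
    curl -k https://192.168.85.2:8443/healthz
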
	I1122 00:36:22.018781  204491 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:36:22.024300  204491 system_pods.go:59] 8 kube-system pods found
	I1122 00:36:22.024339  204491 system_pods.go:61] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:36:22.024346  204491 system_pods.go:61] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:22.024352  204491 system_pods.go:61] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:22.024356  204491 system_pods.go:61] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:22.024364  204491 system_pods.go:61] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:22.024368  204491 system_pods.go:61] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:22.024372  204491 system_pods.go:61] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:22.024377  204491 system_pods.go:61] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:36:22.024383  204491 system_pods.go:74] duration metric: took 5.596781ms to wait for pod list to return data ...
	I1122 00:36:22.024392  204491 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:36:22.027545  204491 default_sa.go:45] found service account: "default"
	I1122 00:36:22.027626  204491 default_sa.go:55] duration metric: took 3.227688ms for default service account to be created ...
	I1122 00:36:22.027638  204491 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:36:22.032182  204491 system_pods.go:86] 8 kube-system pods found
	I1122 00:36:22.032219  204491 system_pods.go:89] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:36:22.032226  204491 system_pods.go:89] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:22.032233  204491 system_pods.go:89] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:22.032237  204491 system_pods.go:89] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:22.032242  204491 system_pods.go:89] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:22.032246  204491 system_pods.go:89] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:22.032250  204491 system_pods.go:89] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:22.032258  204491 system_pods.go:89] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:36:22.032287  204491 retry.go:31] will retry after 227.562193ms: missing components: kube-dns
	I1122 00:36:22.265424  204491 system_pods.go:86] 8 kube-system pods found
	I1122 00:36:22.265462  204491 system_pods.go:89] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:36:22.265472  204491 system_pods.go:89] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:22.265478  204491 system_pods.go:89] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:22.265483  204491 system_pods.go:89] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:22.265489  204491 system_pods.go:89] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:22.265493  204491 system_pods.go:89] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:22.265497  204491 system_pods.go:89] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:22.265504  204491 system_pods.go:89] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:36:22.265524  204491 retry.go:31] will retry after 240.91922ms: missing components: kube-dns
	I1122 00:36:22.510867  204491 system_pods.go:86] 8 kube-system pods found
	I1122 00:36:22.510912  204491 system_pods.go:89] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:36:22.510921  204491 system_pods.go:89] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:22.510927  204491 system_pods.go:89] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:22.510933  204491 system_pods.go:89] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:22.510938  204491 system_pods.go:89] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:22.510941  204491 system_pods.go:89] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:22.510946  204491 system_pods.go:89] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:22.510951  204491 system_pods.go:89] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:36:22.510973  204491 retry.go:31] will retry after 348.682328ms: missing components: kube-dns
	I1122 00:36:22.864222  204491 system_pods.go:86] 8 kube-system pods found
	I1122 00:36:22.864266  204491 system_pods.go:89] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:36:22.864274  204491 system_pods.go:89] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:22.864281  204491 system_pods.go:89] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:22.864286  204491 system_pods.go:89] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:22.864291  204491 system_pods.go:89] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:22.864295  204491 system_pods.go:89] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:22.864300  204491 system_pods.go:89] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:22.864306  204491 system_pods.go:89] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:36:22.864321  204491 retry.go:31] will retry after 425.9451ms: missing components: kube-dns
	I1122 00:36:23.295106  204491 system_pods.go:86] 8 kube-system pods found
	I1122 00:36:23.295135  204491 system_pods.go:89] "coredns-5dd5756b68-mrsrv" [98b86160-bc56-4571-a3ac-ebfd93eda042] Running
	I1122 00:36:23.295143  204491 system_pods.go:89] "etcd-old-k8s-version-187160" [cfb83d05-77be-4f4c-9158-24611f449c9c] Running
	I1122 00:36:23.295148  204491 system_pods.go:89] "kindnet-lprzz" [5aba37af-f297-48d8-bc0b-d368ae96d525] Running
	I1122 00:36:23.295152  204491 system_pods.go:89] "kube-apiserver-old-k8s-version-187160" [4d4b0345-f2cd-4f5f-8fd6-57ef5e247b2c] Running
	I1122 00:36:23.295157  204491 system_pods.go:89] "kube-controller-manager-old-k8s-version-187160" [1be7f7fd-2708-4f93-860c-815b1168878b] Running
	I1122 00:36:23.295161  204491 system_pods.go:89] "kube-proxy-bmr5t" [dffeabf6-7d14-473d-a908-1995469b8249] Running
	I1122 00:36:23.295165  204491 system_pods.go:89] "kube-scheduler-old-k8s-version-187160" [36e02460-d90f-4b42-bfec-85bcc45e0a95] Running
	I1122 00:36:23.295169  204491 system_pods.go:89] "storage-provisioner" [a3bd5eb1-a002-4b61-8bd6-5caabe4bf543] Running
	I1122 00:36:23.295176  204491 system_pods.go:126] duration metric: took 1.267532498s to wait for k8s-apps to be running ...
	I1122 00:36:23.295183  204491 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:36:23.295236  204491 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:36:23.308784  204491 system_svc.go:56] duration metric: took 13.592552ms WaitForService to wait for kubelet
	I1122 00:36:23.308825  204491 kubeadm.go:587] duration metric: took 14.779827699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:36:23.308852  204491 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:36:23.312011  204491 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:36:23.312046  204491 node_conditions.go:123] node cpu capacity is 2
	I1122 00:36:23.312059  204491 node_conditions.go:105] duration metric: took 3.201235ms to run NodePressure ...
	I1122 00:36:23.312071  204491 start.go:242] waiting for startup goroutines ...
	I1122 00:36:23.312086  204491 start.go:247] waiting for cluster config update ...
	I1122 00:36:23.312101  204491 start.go:256] writing updated cluster config ...
	I1122 00:36:23.312384  204491 ssh_runner.go:195] Run: rm -f paused
	I1122 00:36:23.316108  204491 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:36:23.320553  204491 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-mrsrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.325648  204491 pod_ready.go:94] pod "coredns-5dd5756b68-mrsrv" is "Ready"
	I1122 00:36:23.325674  204491 pod_ready.go:86] duration metric: took 5.095957ms for pod "coredns-5dd5756b68-mrsrv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.328915  204491 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.334037  204491 pod_ready.go:94] pod "etcd-old-k8s-version-187160" is "Ready"
	I1122 00:36:23.334067  204491 pod_ready.go:86] duration metric: took 5.116954ms for pod "etcd-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.337625  204491 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.356839  204491 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-187160" is "Ready"
	I1122 00:36:23.356876  204491 pod_ready.go:86] duration metric: took 19.22459ms for pod "kube-apiserver-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.361446  204491 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.720938  204491 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-187160" is "Ready"
	I1122 00:36:23.720968  204491 pod_ready.go:86] duration metric: took 359.490555ms for pod "kube-controller-manager-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:23.921107  204491 pod_ready.go:83] waiting for pod "kube-proxy-bmr5t" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:24.320758  204491 pod_ready.go:94] pod "kube-proxy-bmr5t" is "Ready"
	I1122 00:36:24.320787  204491 pod_ready.go:86] duration metric: took 399.655664ms for pod "kube-proxy-bmr5t" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:24.520579  204491 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:24.920820  204491 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-187160" is "Ready"
	I1122 00:36:24.920848  204491 pod_ready.go:86] duration metric: took 400.240328ms for pod "kube-scheduler-old-k8s-version-187160" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:36:24.920861  204491 pod_ready.go:40] duration metric: took 1.604716609s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:36:24.979879  204491 start.go:628] kubectl: 1.33.2, cluster: 1.28.0 (minor skew: 5)
	I1122 00:36:24.983429  204491 out.go:203] 
	W1122 00:36:24.986583  204491 out.go:285] ! /usr/local/bin/kubectl is version 1.33.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1122 00:36:24.995477  204491 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1122 00:36:24.998398  204491 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-187160" cluster and "default" namespace by default
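
The skew warning reflects kubectl's support policy: a kubectl client is only supported within one minor version (older or newer) of the API server, and 1.33 against 1.28 is five minors apart. A quick way to see both versions at once:

    # report client and server versions side by side
    kubectl version --output=yaml
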
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	494bc3382cb83       1611cd07b61d5       10 seconds ago      Running             busybox                   0                   d35a249623dcf       busybox                                          default
	654eec2541e67       97e04611ad434       16 seconds ago      Running             coredns                   0                   8a99471b5af37       coredns-5dd5756b68-mrsrv                         kube-system
	807c22f672611       ba04bb24b9575       16 seconds ago      Running             storage-provisioner       0                   9cb1da3721961       storage-provisioner                              kube-system
	8cdc46abde6ba       b1a8c6f707935       27 seconds ago      Running             kindnet-cni               0                   692b4826b8541       kindnet-lprzz                                    kube-system
	8e33bd8eeab59       940f54a5bcae9       29 seconds ago      Running             kube-proxy                0                   14060723f1c30       kube-proxy-bmr5t                                 kube-system
	f4bd605783e20       762dce4090c5f       51 seconds ago      Running             kube-scheduler            0                   7720a7ec0099e       kube-scheduler-old-k8s-version-187160            kube-system
	422173de99e2a       46cc66ccc7c19       51 seconds ago      Running             kube-controller-manager   0                   4956ac9800f65       kube-controller-manager-old-k8s-version-187160   kube-system
	4d2a9fc38adb0       9cdd6470f48c8       51 seconds ago      Running             etcd                      0                   5bbc7a9687c9e       etcd-old-k8s-version-187160                      kube-system
	8e8018cdd5ebc       00543d2fe5d71       51 seconds ago      Running             kube-apiserver            0                   51821586bd991       kube-apiserver-old-k8s-version-187160            kube-system
	
	
	==> containerd <==
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.190949694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-mrsrv,Uid:98b86160-bc56-4571-a3ac-ebfd93eda042,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a99471b5af37996285fcf9181e4881d81926045a25ef6ae4127c3af1567110b\""
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.200275758Z" level=info msg="CreateContainer within sandbox \"8a99471b5af37996285fcf9181e4881d81926045a25ef6ae4127c3af1567110b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.218592273Z" level=info msg="Container 654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.229282499Z" level=info msg="CreateContainer within sandbox \"8a99471b5af37996285fcf9181e4881d81926045a25ef6ae4127c3af1567110b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222\""
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.230197671Z" level=info msg="StartContainer for \"654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222\""
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.231322709Z" level=info msg="connecting to shim 654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222" address="unix:///run/containerd/s/7e8e622c8ba08e7a15e3b2eb24a2e7f882c657cd7f8e507b49e75e8c8b234d1a" protocol=ttrpc version=3
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.288656314Z" level=info msg="StartContainer for \"807c22f67261185cb1c38e8c47426e487d0218ab5042e337b6019698fe15e361\" returns successfully"
	Nov 22 00:36:22 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:22.360970347Z" level=info msg="StartContainer for \"654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222\" returns successfully"
	Nov 22 00:36:25 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:25.549036859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f6539f6c-3a59-4e72-b903-a218596cb332,Namespace:default,Attempt:0,}"
	Nov 22 00:36:25 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:25.612228523Z" level=info msg="connecting to shim d35a249623dcf4705609d48e3a1ffdb56d8625cae031e61983391e525f34d081" address="unix:///run/containerd/s/12cff00c35647593d05548c7fa195d2bdf00b8303d1fe1bb1c09dbc3effac604" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:36:25 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:25.685871560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:f6539f6c-3a59-4e72-b903-a218596cb332,Namespace:default,Attempt:0,} returns sandbox id \"d35a249623dcf4705609d48e3a1ffdb56d8625cae031e61983391e525f34d081\""
	Nov 22 00:36:25 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:25.688018486Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.826931948Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.828940345Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937185"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.831283919Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.834691884Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.835352013Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.147291007s"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.835469963Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.837431599Z" level=info msg="CreateContainer within sandbox \"d35a249623dcf4705609d48e3a1ffdb56d8625cae031e61983391e525f34d081\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.857207130Z" level=info msg="Container 494bc3382cb83736807dcb36ea6944784af33e33f94882808930795f27388c7f: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.871093618Z" level=info msg="CreateContainer within sandbox \"d35a249623dcf4705609d48e3a1ffdb56d8625cae031e61983391e525f34d081\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"494bc3382cb83736807dcb36ea6944784af33e33f94882808930795f27388c7f\""
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.872286652Z" level=info msg="StartContainer for \"494bc3382cb83736807dcb36ea6944784af33e33f94882808930795f27388c7f\""
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.873692350Z" level=info msg="connecting to shim 494bc3382cb83736807dcb36ea6944784af33e33f94882808930795f27388c7f" address="unix:///run/containerd/s/12cff00c35647593d05548c7fa195d2bdf00b8303d1fe1bb1c09dbc3effac604" protocol=ttrpc version=3
	Nov 22 00:36:27 old-k8s-version-187160 containerd[761]: time="2025-11-22T00:36:27.950649385Z" level=info msg="StartContainer for \"494bc3382cb83736807dcb36ea6944784af33e33f94882808930795f27388c7f\" returns successfully"
	Nov 22 00:36:35 old-k8s-version-187160 containerd[761]: E1122 00:36:35.395678     761 websocket.go:100] "Unhandled Error" err="unable to upgrade websocket connection: websocket server finished before becoming ready" logger="UnhandledError"
	
	
	==> coredns [654eec2541e67038b677e1627c9fcebe816bdb376138ec956e30db4a5ed16222] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:43688 - 23032 "HINFO IN 3717724876547105178.1036559960491691526. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021196589s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-187160
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-187160
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=old-k8s-version-187160
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_35_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:35:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-187160
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:36:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:36:26 +0000   Sat, 22 Nov 2025 00:35:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:36:26 +0000   Sat, 22 Nov 2025 00:35:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:36:26 +0000   Sat, 22 Nov 2025 00:35:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:36:26 +0000   Sat, 22 Nov 2025 00:36:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-187160
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                a6732ccb-f376-4b40-84a6-bd1e3603acd7
	  Boot ID:                    4e86741a-5896-4eb6-97ce-70ea8beedc67
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-mrsrv                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-old-k8s-version-187160                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         43s
	  kube-system                 kindnet-lprzz                                     100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-187160             250m (12%)    0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 kube-controller-manager-old-k8s-version-187160    200m (10%)    0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                 kube-proxy-bmr5t                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-187160             100m (5%)     0 (0%)      0 (0%)           0 (0%)         43s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
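
As a sanity check, the 850m CPU request is just the sum of the pod requests listed above: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, i.e. 850m/2000m ≈ 42% of the 2-CPU node; the 100m CPU limit comes from kindnet alone, the only pod declaring one. The memory figures check out the same way: requests 70Mi + 100Mi + 50Mi = 220Mi, limits 170Mi + 50Mi = 220Mi.
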
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29s                kube-proxy       
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node old-k8s-version-187160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node old-k8s-version-187160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x7 over 52s)  kubelet          Node old-k8s-version-187160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  52s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s                kubelet          Node old-k8s-version-187160 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s                kubelet          Node old-k8s-version-187160 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s                kubelet          Node old-k8s-version-187160 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  43s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           30s                node-controller  Node old-k8s-version-187160 event: Registered Node old-k8s-version-187160 in Controller
	  Normal  NodeReady                17s                kubelet          Node old-k8s-version-187160 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.017121] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498034] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037542] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808656] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.648915] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 23:58] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=00000000f9ea0775{9P.session} n=0000000035823f74
	[  +0.001177] FS-Cache: O-key=[10] '34323935353131333738'
	[  +0.000819] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000f9ea0775{9P.session} n=00000000dbfd8515
	[  +0.001154] FS-Cache: N-key=[10] '34323935353131333738'
	[Nov22 00:00] hrtimer: interrupt took 9958927 ns
	
	
	==> etcd [4d2a9fc38adb0b31ca51bf2e68e8a59fe482fcd2b93af068c1db236c50e65e57] <==
	{"level":"info","ts":"2025-11-22T00:35:47.864157Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-22T00:35:47.869484Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-11-22T00:35:47.873465Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-11-22T00:35:47.869589Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:35:47.87352Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-11-22T00:35:47.873722Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-22T00:35:47.873744Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-22T00:35:48.530385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 1"}
	{"level":"info","ts":"2025-11-22T00:35:48.530665Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 1"}
	{"level":"info","ts":"2025-11-22T00:35:48.530804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 1"}
	{"level":"info","ts":"2025-11-22T00:35:48.530891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-11-22T00:35:48.530969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-22T00:35:48.531058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-11-22T00:35:48.531128Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-11-22T00:35:48.532649Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:35:48.533919Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-187160 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-22T00:35:48.534231Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:35:48.534769Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:35:48.536025Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:35:48.536154Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-22T00:35:48.536275Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-22T00:35:48.539923Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-22T00:35:48.540183Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-22T00:35:48.540261Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-22T00:35:48.557717Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	
	
	==> kernel <==
	 00:36:39 up  1:18,  0 user,  load average: 3.54, 3.79, 2.85
	Linux old-k8s-version-187160 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8cdc46abde6bad9104481ebcd97fd8584433d6596a115bb7ac80832f48229c0d] <==
	I1122 00:36:11.316025       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:36:11.316279       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:36:11.316431       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:36:11.316442       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:36:11.316456       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:36:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:36:11.612539       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:36:11.612562       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:36:11.612570       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:36:11.613331       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:36:11.813277       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:36:11.813307       1 metrics.go:72] Registering metrics
	I1122 00:36:11.813408       1 controller.go:711] "Syncing nftables rules"
	I1122 00:36:21.620218       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:36:21.620282       1 main.go:301] handling current node
	I1122 00:36:31.613593       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:36:31.613631       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8e8018cdd5ebcb3cd027b426852cef360ea1f4e64ace348b361ff36ccd368012] <==
	I1122 00:35:52.259918       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1122 00:35:52.260106       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1122 00:35:52.260569       1 shared_informer.go:318] Caches are synced for configmaps
	I1122 00:35:52.261451       1 controller.go:624] quota admission added evaluator for: namespaces
	I1122 00:35:52.266423       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1122 00:35:52.266592       1 aggregator.go:166] initial CRD sync complete...
	I1122 00:35:52.266734       1 autoregister_controller.go:141] Starting autoregister controller
	I1122 00:35:52.266876       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:35:52.266967       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:35:52.293066       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:35:52.978061       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:35:52.986815       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:35:52.986841       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:35:53.822664       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:35:53.879991       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:35:53.983791       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:35:53.991186       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:35:53.992448       1 controller.go:624] quota admission added evaluator for: endpoints
	I1122 00:35:53.999040       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:35:54.214195       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1122 00:35:55.482369       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1122 00:35:55.497744       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:35:55.520177       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1122 00:36:08.135231       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:36:08.187027       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [422173de99e2aced5a73d55aeb7e56b0728bc3681c11733622c38f6d0425ecd3] <==
	I1122 00:36:08.131017       1 shared_informer.go:318] Caches are synced for job
	I1122 00:36:08.147818       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:36:08.172229       1 shared_informer.go:318] Caches are synced for resource quota
	I1122 00:36:08.178101       1 shared_informer.go:318] Caches are synced for cronjob
	I1122 00:36:08.190358       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-lprzz"
	I1122 00:36:08.221984       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-bmr5t"
	I1122 00:36:08.232130       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1122 00:36:08.315039       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-p9rvg"
	I1122 00:36:08.340943       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mrsrv"
	I1122 00:36:08.369701       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="156.223576ms"
	I1122 00:36:08.379883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.079149ms"
	I1122 00:36:08.380490       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.036µs"
	I1122 00:36:08.627042       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:36:08.627075       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1122 00:36:08.627148       1 shared_informer.go:318] Caches are synced for garbage collector
	I1122 00:36:10.024745       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1122 00:36:10.054530       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-p9rvg"
	I1122 00:36:10.066916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="42.969237ms"
	I1122 00:36:10.077020       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.054493ms"
	I1122 00:36:10.077099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="43.464µs"
	I1122 00:36:21.693144       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="102.262µs"
	I1122 00:36:21.721559       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.552µs"
	I1122 00:36:22.906222       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="15.230605ms"
	I1122 00:36:22.906582       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="90.003µs"
	I1122 00:36:23.031191       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [8e33bd8eeab59d54aa5b42af6c70b616bdbf7c411e21a37367f8687511b9cbf6] <==
	I1122 00:36:09.223209       1 server_others.go:69] "Using iptables proxy"
	I1122 00:36:09.244053       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I1122 00:36:09.307007       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:36:09.309152       1 server_others.go:152] "Using iptables Proxier"
	I1122 00:36:09.309196       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1122 00:36:09.309205       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1122 00:36:09.309229       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1122 00:36:09.309429       1 server.go:846] "Version info" version="v1.28.0"
	I1122 00:36:09.309444       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:36:09.310333       1 config.go:188] "Starting service config controller"
	I1122 00:36:09.310356       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1122 00:36:09.310374       1 config.go:97] "Starting endpoint slice config controller"
	I1122 00:36:09.310377       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1122 00:36:09.313015       1 config.go:315] "Starting node config controller"
	I1122 00:36:09.313030       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1122 00:36:09.411178       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1122 00:36:09.411227       1 shared_informer.go:318] Caches are synced for service config
	I1122 00:36:09.413934       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [f4bd605783e2022ce18c699001040087e5badf067b4e0004a50ec4c353329100] <==
	I1122 00:35:51.229184       1 serving.go:348] Generated self-signed cert in-memory
	I1122 00:35:54.167904       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1122 00:35:54.168129       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:35:54.173851       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1122 00:35:54.173999       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1122 00:35:54.174229       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1122 00:35:54.174045       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:35:54.181808       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1122 00:35:54.174056       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:35:54.183738       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1122 00:35:54.174069       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1122 00:35:54.274685       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1122 00:35:54.284155       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1122 00:35:54.285311       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.146939    1537 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.149363    1537 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.250067    1537 topology_manager.go:215] "Topology Admit Handler" podUID="5aba37af-f297-48d8-bc0b-d368ae96d525" podNamespace="kube-system" podName="kindnet-lprzz"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.256574    1537 topology_manager.go:215] "Topology Admit Handler" podUID="dffeabf6-7d14-473d-a908-1995469b8249" podNamespace="kube-system" podName="kube-proxy-bmr5t"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318270    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4tdg\" (UniqueName: \"kubernetes.io/projected/dffeabf6-7d14-473d-a908-1995469b8249-kube-api-access-r4tdg\") pod \"kube-proxy-bmr5t\" (UID: \"dffeabf6-7d14-473d-a908-1995469b8249\") " pod="kube-system/kube-proxy-bmr5t"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318333    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5aba37af-f297-48d8-bc0b-d368ae96d525-cni-cfg\") pod \"kindnet-lprzz\" (UID: \"5aba37af-f297-48d8-bc0b-d368ae96d525\") " pod="kube-system/kindnet-lprzz"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318362    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dffeabf6-7d14-473d-a908-1995469b8249-lib-modules\") pod \"kube-proxy-bmr5t\" (UID: \"dffeabf6-7d14-473d-a908-1995469b8249\") " pod="kube-system/kube-proxy-bmr5t"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318389    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5aba37af-f297-48d8-bc0b-d368ae96d525-lib-modules\") pod \"kindnet-lprzz\" (UID: \"5aba37af-f297-48d8-bc0b-d368ae96d525\") " pod="kube-system/kindnet-lprzz"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318411    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/dffeabf6-7d14-473d-a908-1995469b8249-kube-proxy\") pod \"kube-proxy-bmr5t\" (UID: \"dffeabf6-7d14-473d-a908-1995469b8249\") " pod="kube-system/kube-proxy-bmr5t"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318434    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dffeabf6-7d14-473d-a908-1995469b8249-xtables-lock\") pod \"kube-proxy-bmr5t\" (UID: \"dffeabf6-7d14-473d-a908-1995469b8249\") " pod="kube-system/kube-proxy-bmr5t"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318457    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5aba37af-f297-48d8-bc0b-d368ae96d525-xtables-lock\") pod \"kindnet-lprzz\" (UID: \"5aba37af-f297-48d8-bc0b-d368ae96d525\") " pod="kube-system/kindnet-lprzz"
	Nov 22 00:36:08 old-k8s-version-187160 kubelet[1537]: I1122 00:36:08.318481    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wksn9\" (UniqueName: \"kubernetes.io/projected/5aba37af-f297-48d8-bc0b-d368ae96d525-kube-api-access-wksn9\") pod \"kindnet-lprzz\" (UID: \"5aba37af-f297-48d8-bc0b-d368ae96d525\") " pod="kube-system/kindnet-lprzz"
	Nov 22 00:36:09 old-k8s-version-187160 kubelet[1537]: I1122 00:36:09.837710    1537 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-bmr5t" podStartSLOduration=1.8375463779999999 podCreationTimestamp="2025-11-22 00:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:36:09.837193356 +0000 UTC m=+14.395591777" watchObservedRunningTime="2025-11-22 00:36:09.837546378 +0000 UTC m=+14.395944807"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.636202    1537 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.673663    1537 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-lprzz" podStartSLOduration=11.549749383 podCreationTimestamp="2025-11-22 00:36:08 +0000 UTC" firstStartedPulling="2025-11-22 00:36:08.937197424 +0000 UTC m=+13.495595845" lastFinishedPulling="2025-11-22 00:36:11.061029686 +0000 UTC m=+15.619428107" observedRunningTime="2025-11-22 00:36:11.843836409 +0000 UTC m=+16.402234830" watchObservedRunningTime="2025-11-22 00:36:21.673581645 +0000 UTC m=+26.231980074"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.674153    1537 topology_manager.go:215] "Topology Admit Handler" podUID="a3bd5eb1-a002-4b61-8bd6-5caabe4bf543" podNamespace="kube-system" podName="storage-provisioner"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.682473    1537 topology_manager.go:215] "Topology Admit Handler" podUID="98b86160-bc56-4571-a3ac-ebfd93eda042" podNamespace="kube-system" podName="coredns-5dd5756b68-mrsrv"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.718520    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7j69\" (UniqueName: \"kubernetes.io/projected/a3bd5eb1-a002-4b61-8bd6-5caabe4bf543-kube-api-access-v7j69\") pod \"storage-provisioner\" (UID: \"a3bd5eb1-a002-4b61-8bd6-5caabe4bf543\") " pod="kube-system/storage-provisioner"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.718757    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a3bd5eb1-a002-4b61-8bd6-5caabe4bf543-tmp\") pod \"storage-provisioner\" (UID: \"a3bd5eb1-a002-4b61-8bd6-5caabe4bf543\") " pod="kube-system/storage-provisioner"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.718938    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rbh7\" (UniqueName: \"kubernetes.io/projected/98b86160-bc56-4571-a3ac-ebfd93eda042-kube-api-access-9rbh7\") pod \"coredns-5dd5756b68-mrsrv\" (UID: \"98b86160-bc56-4571-a3ac-ebfd93eda042\") " pod="kube-system/coredns-5dd5756b68-mrsrv"
	Nov 22 00:36:21 old-k8s-version-187160 kubelet[1537]: I1122 00:36:21.719128    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98b86160-bc56-4571-a3ac-ebfd93eda042-config-volume\") pod \"coredns-5dd5756b68-mrsrv\" (UID: \"98b86160-bc56-4571-a3ac-ebfd93eda042\") " pod="kube-system/coredns-5dd5756b68-mrsrv"
	Nov 22 00:36:22 old-k8s-version-187160 kubelet[1537]: I1122 00:36:22.870118    1537 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.870074673 podCreationTimestamp="2025-11-22 00:36:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:36:22.869831076 +0000 UTC m=+27.428229505" watchObservedRunningTime="2025-11-22 00:36:22.870074673 +0000 UTC m=+27.428473102"
	Nov 22 00:36:25 old-k8s-version-187160 kubelet[1537]: I1122 00:36:25.241313    1537 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-mrsrv" podStartSLOduration=17.241260829 podCreationTimestamp="2025-11-22 00:36:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:36:22.890080452 +0000 UTC m=+27.448478873" watchObservedRunningTime="2025-11-22 00:36:25.241260829 +0000 UTC m=+29.799659258"
	Nov 22 00:36:25 old-k8s-version-187160 kubelet[1537]: I1122 00:36:25.242395    1537 topology_manager.go:215] "Topology Admit Handler" podUID="f6539f6c-3a59-4e72-b903-a218596cb332" podNamespace="default" podName="busybox"
	Nov 22 00:36:25 old-k8s-version-187160 kubelet[1537]: I1122 00:36:25.347504    1537 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb8pc\" (UniqueName: \"kubernetes.io/projected/f6539f6c-3a59-4e72-b903-a218596cb332-kube-api-access-mb8pc\") pod \"busybox\" (UID: \"f6539f6c-3a59-4e72-b903-a218596cb332\") " pod="default/busybox"
	
	
	==> storage-provisioner [807c22f67261185cb1c38e8c47426e487d0218ab5042e337b6019698fe15e361] <==
	I1122 00:36:22.277527       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:36:22.313230       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:36:22.313281       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1122 00:36:22.344440       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:36:22.345711       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-187160_cf733f51-b72c-4ec1-9a6a-692835c1d302!
	I1122 00:36:22.353067       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"59bf7e8d-bfc7-4d7a-ba80-974e16cfaea6", APIVersion:"v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-187160_cf733f51-b72c-4ec1-9a6a-692835c1d302 became leader
	I1122 00:36:22.445864       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-187160_cf733f51-b72c-4ec1-9a6a-692835c1d302!
	

                                                
                                                
-- /stdout --
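The storage-provisioner lines near the end of the dump above show the standard client-go leader-election handshake: attempt the kube-system/k8s.io-minikube-hostpath lock, acquire it, then start the provisioner controller. A minimal sketch of that pattern, assuming the k8s.io/client-go leaderelection package with a Lease-based lock (the provisioner in this log uses an older Endpoints-based lock, and the identity string here is illustrative, not minikube's):

	// leaderelect.go — a sketch (not minikube's actual code) of the client-go
	// leader-election pattern visible in the storage-provisioner log above.
	package main

	import (
		"context"
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lock name/namespace taken from the log; a Lease lock is assumed here.
		lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
			"kube-system", "k8s.io-minikube-hostpath",
			client.CoreV1(), client.CoordinationV1(),
			resourcelock.ResourceLockConfig{Identity: "example-identity"})
		if err != nil {
			log.Fatal(err)
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) {
					// Mirrors "successfully acquired lease ... Starting provisioner controller".
					log.Println("acquired lease, starting provisioner controller")
				},
				OnStoppedLeading: func() {
					log.Println("lost lease, shutting down")
				},
			},
		})
	}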
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-187160 -n old-k8s-version-187160
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-187160 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (14.85s)
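For local triage, the failing assertion above reduces to running `ulimit -n` inside the busybox pod and comparing the result with the 1048576 soft limit the test expects. A minimal standalone repro sketch (not the test's actual helper; the context name is copied from the failure log above):

	// ulimit_repro.go — a sketch reproducing the failing check, not the test code.
	// It execs `ulimit -n` in the busybox pod and compares against 1048576.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Context name taken from the failure log; adjust per profile.
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-187160",
			"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
		if err != nil {
			fmt.Fprintf(os.Stderr, "kubectl exec failed: %v\n%s\n", err, out)
			os.Exit(1)
		}
		if got := strings.TrimSpace(string(out)); got != "1048576" {
			fmt.Printf("'ulimit -n' returned %s, expected 1048576\n", got)
			os.Exit(1)
		}
		fmt.Println("ulimit -n matches expected 1048576")
	}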

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-080784 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [2004090a-bf01-4959-8a39-43712a0513ef] Pending
helpers_test.go:352: "busybox" [2004090a-bf01-4959-8a39-43712a0513ef] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [2004090a-bf01-4959-8a39-43712a0513ef] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003993887s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-080784 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-080784
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-080784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38",
	        "Created": "2025-11-22T00:37:47.326721111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213431,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:37:47.405552076Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38/hosts",
	        "LogPath": "/var/lib/docker/containers/ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38/ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38-json.log",
	        "Name": "/default-k8s-diff-port-080784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-080784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-080784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38",
	                "LowerDir": "/var/lib/docker/overlay2/90d75a3a12f9b620c4ff64f5acb73959349482996193aae272e4736aa79307da-init/diff:/var/lib/docker/overlay2/7cce95e9587a813ce5f3ee5f28c6de3b78ed608010774b6d981aecaad739a571/diff",
	                "MergedDir": "/var/lib/docker/overlay2/90d75a3a12f9b620c4ff64f5acb73959349482996193aae272e4736aa79307da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/90d75a3a12f9b620c4ff64f5acb73959349482996193aae272e4736aa79307da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/90d75a3a12f9b620c4ff64f5acb73959349482996193aae272e4736aa79307da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-080784",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-080784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-080784",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-080784",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-080784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d0b4b9a9685daa50934a6cdbf7e954d3579b493735b3e580febbc2d178d98586",
	            "SandboxKey": "/var/run/docker/netns/d0b4b9a9685d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-080784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:d8:a7:6a:56:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "791319b2b020217842d6d72bba721e8e9b81db7f24032687c53843e39473054c",
	                    "EndpointID": "ac053bd91ee2ee45e7f9fdad2f1462d803b9b1aa2c2d598764d8db0a32b6f2c2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-080784",
	                        "ac2a6eee5f6b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
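Since this profile pins the API server to port 8444, the NetworkSettings.Ports block in the inspect output above is where the host-side mapping (127.0.0.1:33066 in this run) comes from. A small sketch of pulling that mapping out with a Go template passed to `docker inspect` (profile name taken from this run; the template syntax is standard docker CLI, but this is illustrative, not how the test harness does it):

	// port_lookup.go — a sketch (not minikube code) that extracts the host port
	// mapped to the container's 8444/tcp endpoint from `docker inspect`.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		tmpl := `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl,
			"default-k8s-diff-port-080784").Output()
		if err != nil {
			panic(err)
		}
		fmt.Printf("apiserver reachable at 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
	}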
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-080784 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-080784 logs -n 25: (1.259549088s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-env-115975 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-115975     │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-381698    │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ start   │ -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-381698    │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p kubernetes-upgrade-381698                                                                                                                                                                                                                        │ kubernetes-upgrade-381698    │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p cert-expiration-285797 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ force-systemd-env-115975 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-115975     │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p force-systemd-env-115975                                                                                                                                                                                                                         │ force-systemd-env-115975     │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p cert-options-089440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ cert-options-089440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ -p cert-options-089440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ delete  │ -p cert-options-089440                                                                                                                                                                                                                              │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ start   │ -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-187160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ stop    │ -p old-k8s-version-187160 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-187160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ start   │ -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:37 UTC │
	│ image   │ old-k8s-version-187160 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ pause   │ -p old-k8s-version-187160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ unpause │ -p old-k8s-version-187160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ delete  │ -p old-k8s-version-187160                                                                                                                                                                                                                           │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ delete  │ -p old-k8s-version-187160                                                                                                                                                                                                                           │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ start   │ -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:39 UTC │
	│ start   │ -p cert-expiration-285797 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │ 22 Nov 25 00:38 UTC │
	│ delete  │ -p cert-expiration-285797                                                                                                                                                                                                                           │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │ 22 Nov 25 00:38 UTC │
	│ start   │ -p embed-certs-540723 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:38:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:38:16.724969  216447 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:38:16.725145  216447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:38:16.725157  216447 out.go:374] Setting ErrFile to fd 2...
	I1122 00:38:16.725163  216447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:38:16.725402  216447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:38:16.725798  216447 out.go:368] Setting JSON to false
	I1122 00:38:16.726726  216447 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4834,"bootTime":1763767063,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1122 00:38:16.726792  216447 start.go:143] virtualization:  
	I1122 00:38:16.730313  216447 out.go:179] * [embed-certs-540723] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:38:16.734885  216447 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:38:16.735035  216447 notify.go:221] Checking for updates...
	I1122 00:38:16.742549  216447 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:38:16.746070  216447 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:38:16.749321  216447 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1122 00:38:16.752546  216447 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:38:16.755738  216447 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:38:16.759353  216447 config.go:182] Loaded profile config "default-k8s-diff-port-080784": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:38:16.759481  216447 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:38:16.785364  216447 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:38:16.785622  216447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:38:16.848069  216447 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:38:16.837751425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:38:16.848177  216447 docker.go:319] overlay module found
	I1122 00:38:16.851526  216447 out.go:179] * Using the docker driver based on user configuration
	I1122 00:38:14.287881  213043 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:38:14.292452  213043 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:38:14.292474  213043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:38:14.309561  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:38:14.880071  213043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:38:14.880142  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:14.880212  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-080784 minikube.k8s.io/updated_at=2025_11_22T00_38_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=default-k8s-diff-port-080784 minikube.k8s.io/primary=true
	I1122 00:38:14.905591  213043 ops.go:34] apiserver oom_adj: -16
	I1122 00:38:15.334962  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:15.835022  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:16.335530  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:16.836187  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:16.854576  216447 start.go:309] selected driver: docker
	I1122 00:38:16.854594  216447 start.go:930] validating driver "docker" against <nil>
	I1122 00:38:16.854607  216447 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:38:16.855439  216447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:38:16.966333  216447 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:38:16.957247731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:38:16.966483  216447 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:38:16.966712  216447 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:38:16.969912  216447 out.go:179] * Using Docker driver with root privileges
	I1122 00:38:16.972856  216447 cni.go:84] Creating CNI manager for ""
	I1122 00:38:16.972928  216447 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:38:16.972942  216447 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:38:16.973031  216447 start.go:353] cluster config:
	{Name:embed-certs-540723 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-540723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:38:16.979240  216447 out.go:179] * Starting "embed-certs-540723" primary control-plane node in "embed-certs-540723" cluster
	I1122 00:38:16.982109  216447 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:38:16.985012  216447 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:38:16.987911  216447 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:38:16.987958  216447 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1122 00:38:16.987972  216447 cache.go:65] Caching tarball of preloaded images
	I1122 00:38:16.987983  216447 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:38:16.988067  216447 preload.go:238] Found /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1122 00:38:16.988078  216447 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:38:16.988189  216447 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/config.json ...
	I1122 00:38:16.988207  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/config.json: {Name:mke532fb35dfb339616ed8cd6aa11a6b4f357b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:17.010559  216447 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:38:17.010584  216447 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:38:17.010606  216447 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:38:17.010629  216447 start.go:360] acquireMachinesLock for embed-certs-540723: {Name:mk358644e8d9346f7e946c6076afa0430fba0d3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:38:17.010765  216447 start.go:364] duration metric: took 116.096µs to acquireMachinesLock for "embed-certs-540723"
	I1122 00:38:17.010808  216447 start.go:93] Provisioning new machine with config: &{Name:embed-certs-540723 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-540723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:38:17.010874  216447 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:38:17.335330  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:17.835060  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:18.336050  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:18.835071  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:19.335128  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:19.809086  213043 kubeadm.go:1114] duration metric: took 4.929007747s to wait for elevateKubeSystemPrivileges
	I1122 00:38:19.809120  213043 kubeadm.go:403] duration metric: took 24.28896765s to StartCluster
	I1122 00:38:19.809138  213043 settings.go:142] acquiring lock: {Name:mk5b79634916fd13f05f4c848ff3e8b07cafa39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:19.809216  213043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:38:19.809938  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:19.811994  213043 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:38:19.812124  213043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:38:19.812467  213043 config.go:182] Loaded profile config "default-k8s-diff-port-080784": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:38:19.812508  213043 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:38:19.812576  213043 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-080784"
	I1122 00:38:19.812590  213043 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-080784"
	I1122 00:38:19.812618  213043 host.go:66] Checking if "default-k8s-diff-port-080784" exists ...
	I1122 00:38:19.813163  213043 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:38:19.813675  213043 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-080784"
	I1122 00:38:19.813702  213043 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-080784"
	I1122 00:38:19.814006  213043 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:38:19.820826  213043 out.go:179] * Verifying Kubernetes components...
	I1122 00:38:19.828353  213043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:38:19.852142  213043 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:38:17.014410  216447 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:38:17.014663  216447 start.go:159] libmachine.API.Create for "embed-certs-540723" (driver="docker")
	I1122 00:38:17.014696  216447 client.go:173] LocalClient.Create starting
	I1122 00:38:17.014777  216447 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem
	I1122 00:38:17.014819  216447 main.go:143] libmachine: Decoding PEM data...
	I1122 00:38:17.014841  216447 main.go:143] libmachine: Parsing certificate...
	I1122 00:38:17.016356  216447 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem
	I1122 00:38:17.016410  216447 main.go:143] libmachine: Decoding PEM data...
	I1122 00:38:17.016428  216447 main.go:143] libmachine: Parsing certificate...
	I1122 00:38:17.016859  216447 cli_runner.go:164] Run: docker network inspect embed-certs-540723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:38:17.033109  216447 cli_runner.go:211] docker network inspect embed-certs-540723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:38:17.033212  216447 network_create.go:284] running [docker network inspect embed-certs-540723] to gather additional debugging logs...
	I1122 00:38:17.033237  216447 cli_runner.go:164] Run: docker network inspect embed-certs-540723
	W1122 00:38:17.051482  216447 cli_runner.go:211] docker network inspect embed-certs-540723 returned with exit code 1
	I1122 00:38:17.051514  216447 network_create.go:287] error running [docker network inspect embed-certs-540723]: docker network inspect embed-certs-540723: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-540723 not found
	I1122 00:38:17.051529  216447 network_create.go:289] output of [docker network inspect embed-certs-540723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-540723 not found
	
	** /stderr **
	I1122 00:38:17.051781  216447 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:38:17.069266  216447 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc891483483f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:f5:f5:5e:a2:12} reservation:<nil>}
	I1122 00:38:17.069805  216447 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dcada94e63da IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:bf:ad:c8:04:5e} reservation:<nil>}
	I1122 00:38:17.070332  216447 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7ab25f17f29c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:32:b1:2f:5f:ec} reservation:<nil>}
	I1122 00:38:17.070973  216447 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a280a0}
	I1122 00:38:17.071004  216447 network_create.go:124] attempt to create docker network embed-certs-540723 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1122 00:38:17.071087  216447 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-540723 embed-certs-540723
	I1122 00:38:17.134282  216447 network_create.go:108] docker network embed-certs-540723 192.168.76.0/24 created
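
The network setup above shows the pattern: scan the 192.168.x.0/24 candidates, skip subnets already claimed by existing bridges, and create the first free one with a pinned gateway and MTU. A minimal standalone sketch of the create step, reusing the flags from the log (NET, SUBNET and GATEWAY are placeholders):

    # Sketch: create an isolated bridge network with a pinned subnet,
    # mirroring the flags minikube passed above.
    NET=embed-certs-540723
    SUBNET=192.168.76.0/24
    GATEWAY=192.168.76.1
    docker network create --driver=bridge \
      --subnet="$SUBNET" --gateway="$GATEWAY" \
      -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
      --label=created_by.minikube.sigs.k8s.io=true "$NET"
    # Confirm the subnet landed where expected:
    docker network inspect "$NET" --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'
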
	I1122 00:38:17.134322  216447 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-540723" container
	I1122 00:38:17.134418  216447 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:38:17.152204  216447 cli_runner.go:164] Run: docker volume create embed-certs-540723 --label name.minikube.sigs.k8s.io=embed-certs-540723 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:38:17.169709  216447 oci.go:103] Successfully created a docker volume embed-certs-540723
	I1122 00:38:17.169805  216447 cli_runner.go:164] Run: docker run --rm --name embed-certs-540723-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-540723 --entrypoint /usr/bin/test -v embed-certs-540723:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:38:17.749906  216447 oci.go:107] Successfully prepared a docker volume embed-certs-540723
	I1122 00:38:17.749991  216447 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:38:17.750008  216447 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:38:17.750083  216447 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-540723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
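
The two `docker run --rm` invocations above are the kic volume-priming pattern: the named volume is first probed with /usr/bin/test (the "preload-sidecar"), then filled by untarring the lz4 preload inside a throwaway container, so the node container later starts with /var already populated. A trimmed sketch of the extraction half, with the long cache path replaced by a placeholder:

    # Sketch: extract a preloaded image tarball into a named Docker volume
    # via a disposable container. VOLUME, TARBALL and IMAGE are placeholders.
    VOLUME=embed-certs-540723
    TARBALL=$HOME/.minikube/cache/preloaded-tarball/preloaded-images.tar.lz4
    IMAGE=gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934
    docker run --rm --entrypoint /usr/bin/tar \
      -v "$TARBALL":/preloaded.tar:ro \
      -v "$VOLUME":/extractDir \
      "$IMAGE" -I lz4 -xf /preloaded.tar -C /extractDir
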
	I1122 00:38:19.855178  213043 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:38:19.855199  213043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:38:19.855272  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:38:19.858445  213043 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-080784"
	I1122 00:38:19.858508  213043 host.go:66] Checking if "default-k8s-diff-port-080784" exists ...
	I1122 00:38:19.859014  213043 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:38:19.895213  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	I1122 00:38:19.910584  213043 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:38:19.910608  213043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:38:19.910692  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:38:19.939858  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	I1122 00:38:20.442140  213043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:38:20.641945  213043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:38:20.644595  213043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:38:20.661559  213043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:38:21.448312  213043 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.006067835s)
	I1122 00:38:21.448338  213043 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
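
The sed pipeline above edits the coredns ConfigMap in place: it splices a hosts block ahead of the forward plugin so host.minikube.internal resolves to the network gateway, and adds the log directive. A quick way to verify the injected record (a sketch, assuming kubectl is already pointed at this cluster):

    # Sketch: confirm the host record landed in the Corefile.
    kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' \
      | grep -A3 'hosts {'
    # Expected fragment, per the sed expression above:
    #        hosts {
    #           192.168.85.1 host.minikube.internal
    #           fallthrough
    #        }
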
	I1122 00:38:21.449027  213043 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-080784" to be "Ready" ...
	I1122 00:38:21.978176  213043 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-080784" context rescaled to 1 replicas
	I1122 00:38:22.121759  213043 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.460100142s)
	I1122 00:38:22.143330  213043 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1122 00:38:22.504271  216447 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-540723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.754126796s)
	I1122 00:38:22.504306  216447 kic.go:203] duration metric: took 4.754294938s to extract preloaded images to volume ...
	W1122 00:38:22.504447  216447 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:38:22.504568  216447 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:38:22.566607  216447 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-540723 --name embed-certs-540723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-540723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-540723 --network embed-certs-540723 --ip 192.168.76.2 --volume embed-certs-540723:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
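
This single docker run is where the "machine" appears: a privileged container running an init process, the preloaded volume mounted at /var, a static IP on the freshly created network, and 127.0.0.1-only port publishing so sshd (22) and the API server (8443) are reachable from the host on ephemeral ports. A trimmed sketch keeping only the load-bearing flags from the command above:

    # Sketch: the essential flags behind a kic node container
    # (values copied from the log; non-essential flags dropped).
    docker run -d -t --privileged \
      --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
      --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
      --network embed-certs-540723 --ip 192.168.76.2 \
      --volume embed-certs-540723:/var \
      --memory=3072mb --cpus=2 \
      --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
      --hostname embed-certs-540723 --name embed-certs-540723 \
      gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
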
	I1122 00:38:22.929612  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Running}}
	I1122 00:38:22.958271  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:22.979674  216447 cli_runner.go:164] Run: docker exec embed-certs-540723 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:38:23.046491  216447 oci.go:144] the created container "embed-certs-540723" has a running status.
	I1122 00:38:23.046528  216447 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa...
	I1122 00:38:23.443054  216447 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:38:23.469583  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:23.490215  216447 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:38:23.490251  216447 kic_runner.go:114] Args: [docker exec --privileged embed-certs-540723 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:38:23.555316  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:23.573560  216447 machine.go:94] provisionDockerMachine start ...
	I1122 00:38:23.573655  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:23.595112  216447 main.go:143] libmachine: Using SSH client type: native
	I1122 00:38:23.595445  216447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:38:23.595458  216447 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:38:23.596231  216447 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
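
The handshake EOF here is expected: sshd inside the container is still coming up, and libmachine retries until the hostname probe succeeds at 00:38:26. Because docker assigned an ephemeral host port for 22/tcp (33068 in this run), reproducing the session by hand means resolving the mapping first. A sketch, with the key path shortened to a placeholder:

    # Sketch: ssh into the kic node the way the harness does.
    PORT=$(docker container inspect \
      -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
      embed-certs-540723)
    ssh -i ~/.minikube/machines/embed-certs-540723/id_rsa \
      -o StrictHostKeyChecking=no -p "$PORT" docker@127.0.0.1 hostname
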
	I1122 00:38:22.191659  213043 addons.go:530] duration metric: took 2.379139476s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1122 00:38:23.454070  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:25.952294  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:26.739087  216447 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-540723
	
	I1122 00:38:26.739113  216447 ubuntu.go:182] provisioning hostname "embed-certs-540723"
	I1122 00:38:26.739190  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:26.758437  216447 main.go:143] libmachine: Using SSH client type: native
	I1122 00:38:26.758749  216447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:38:26.758766  216447 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-540723 && echo "embed-certs-540723" | sudo tee /etc/hostname
	I1122 00:38:26.909160  216447 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-540723
	
	I1122 00:38:26.909280  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:26.927909  216447 main.go:143] libmachine: Using SSH client type: native
	I1122 00:38:26.928223  216447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:38:26.928240  216447 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-540723' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-540723/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-540723' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:38:27.067945  216447 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:38:27.067968  216447 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-2332/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-2332/.minikube}
	I1122 00:38:27.068011  216447 ubuntu.go:190] setting up certificates
	I1122 00:38:27.068023  216447 provision.go:84] configureAuth start
	I1122 00:38:27.068096  216447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-540723
	I1122 00:38:27.085333  216447 provision.go:143] copyHostCerts
	I1122 00:38:27.085407  216447 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem, removing ...
	I1122 00:38:27.085422  216447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem
	I1122 00:38:27.085512  216447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem (1675 bytes)
	I1122 00:38:27.085615  216447 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem, removing ...
	I1122 00:38:27.085625  216447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem
	I1122 00:38:27.085655  216447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem (1078 bytes)
	I1122 00:38:27.085725  216447 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem, removing ...
	I1122 00:38:27.085734  216447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem
	I1122 00:38:27.085763  216447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem (1123 bytes)
	I1122 00:38:27.085822  216447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem org=jenkins.embed-certs-540723 san=[127.0.0.1 192.168.76.2 embed-certs-540723 localhost minikube]
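
configureAuth issues a server certificate whose SANs cover every name the machine may be dialed by: loopback, the static network IP, the hostname, localhost and minikube. minikube does this in Go; the openssl sketch below is only an equivalent illustration of producing a cert with the same SAN set, not the actual implementation:

    # Sketch (openssl equivalent, not minikube's Go code): sign a server
    # cert with the SAN list shown in the log.
    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
      -out server.csr -subj "/O=jenkins.embed-certs-540723"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem \
      -CAcreateserial -out server.pem -days 365 \
      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:embed-certs-540723,DNS:localhost,DNS:minikube')
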
	I1122 00:38:27.251405  216447 provision.go:177] copyRemoteCerts
	I1122 00:38:27.251480  216447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:38:27.251519  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:27.270171  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:27.371334  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:38:27.388811  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1122 00:38:27.407366  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:38:27.425162  216447 provision.go:87] duration metric: took 357.113917ms to configureAuth
	I1122 00:38:27.425192  216447 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:38:27.425402  216447 config.go:182] Loaded profile config "embed-certs-540723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:38:27.425414  216447 machine.go:97] duration metric: took 3.851830554s to provisionDockerMachine
	I1122 00:38:27.425421  216447 client.go:176] duration metric: took 10.41071079s to LocalClient.Create
	I1122 00:38:27.425441  216447 start.go:167] duration metric: took 10.410785277s to libmachine.API.Create "embed-certs-540723"
	I1122 00:38:27.425450  216447 start.go:293] postStartSetup for "embed-certs-540723" (driver="docker")
	I1122 00:38:27.425459  216447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:38:27.425508  216447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:38:27.425553  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:27.443017  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:27.548078  216447 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:38:27.551646  216447 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:38:27.551693  216447 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:38:27.551721  216447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/addons for local assets ...
	I1122 00:38:27.551803  216447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/files for local assets ...
	I1122 00:38:27.551930  216447 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem -> 56232.pem in /etc/ssl/certs
	I1122 00:38:27.552082  216447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:38:27.560107  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:38:27.579802  216447 start.go:296] duration metric: took 154.338128ms for postStartSetup
	I1122 00:38:27.580187  216447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-540723
	I1122 00:38:27.597480  216447 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/config.json ...
	I1122 00:38:27.597772  216447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:38:27.597823  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:27.615163  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:27.713345  216447 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:38:27.718217  216447 start.go:128] duration metric: took 10.707327523s to createHost
	I1122 00:38:27.718242  216447 start.go:83] releasing machines lock for "embed-certs-540723", held for 10.707462179s
	I1122 00:38:27.718341  216447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-540723
	I1122 00:38:27.735539  216447 ssh_runner.go:195] Run: cat /version.json
	I1122 00:38:27.735631  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:27.735721  216447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:38:27.735779  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:27.757717  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:27.769777  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:27.950768  216447 ssh_runner.go:195] Run: systemctl --version
	I1122 00:38:27.958039  216447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:38:27.962153  216447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:38:27.962219  216447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:38:27.992451  216447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
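
Note the .mk_disabled convention: conflicting bridge/podman CNI configs are renamed rather than deleted, so kindnet's config wins while the originals stay recoverable. A sketch of inspecting, and if needed undoing, the rename by hand:

    # Sketch: list what was parked, then strip the suffix to re-enable.
    ls /etc/cni/net.d/*.mk_disabled
    for f in /etc/cni/net.d/*.mk_disabled; do
      sudo mv "$f" "${f%.mk_disabled}"
    done
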
	I1122 00:38:27.992523  216447 start.go:496] detecting cgroup driver to use...
	I1122 00:38:27.992569  216447 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:38:27.992624  216447 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:38:28.010012  216447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:38:28.024708  216447 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:38:28.024789  216447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:38:28.050340  216447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:38:28.074293  216447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:38:28.208582  216447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:38:28.341936  216447 docker.go:234] disabling docker service ...
	I1122 00:38:28.342028  216447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:38:28.366071  216447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:38:28.380541  216447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:38:28.506715  216447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:38:28.632854  216447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:38:28.645947  216447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:38:28.661609  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:38:28.671259  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:38:28.681790  216447 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1122 00:38:28.681899  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1122 00:38:28.691692  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:38:28.701452  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:38:28.710886  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:38:28.720844  216447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:38:28.729103  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:38:28.737886  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:38:28.746559  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:38:28.755737  216447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:38:28.763441  216447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:38:28.770908  216447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:38:28.883777  216447 ssh_runner.go:195] Run: sudo systemctl restart containerd
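
The sed runs above are the entire runtime hand-off: crictl gets pointed at the containerd socket, the sandbox (pause) image is pinned, SystemdCgroup is forced to false so containerd matches the cgroupfs driver detected on the host, and the CNI conf_dir is normalized before containerd restarts. A quick spot-check of the values those edits should leave behind (a sketch, assuming the stock config.toml layout):

    # Sketch: verify the post-edit containerd settings.
    grep -E 'sandbox_image|SystemdCgroup|conf_dir' /etc/containerd/config.toml
    # Expected, per the log:
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   SystemdCgroup = false
    #   conf_dir = "/etc/cni/net.d"
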
	I1122 00:38:29.011247  216447 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:38:29.011393  216447 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:38:29.015477  216447 start.go:564] Will wait 60s for crictl version
	I1122 00:38:29.015633  216447 ssh_runner.go:195] Run: which crictl
	I1122 00:38:29.019460  216447 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:38:29.050834  216447 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:38:29.050976  216447 ssh_runner.go:195] Run: containerd --version
	I1122 00:38:29.070794  216447 ssh_runner.go:195] Run: containerd --version
	I1122 00:38:29.095369  216447 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1122 00:38:29.098378  216447 cli_runner.go:164] Run: docker network inspect embed-certs-540723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:38:29.113773  216447 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:38:29.123795  216447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:38:29.135257  216447 kubeadm.go:884] updating cluster {Name:embed-certs-540723 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-540723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:38:29.135374  216447 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:38:29.135454  216447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:38:29.159277  216447 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:38:29.159301  216447 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:38:29.159357  216447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:38:29.183383  216447 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:38:29.183410  216447 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:38:29.183419  216447 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1122 00:38:29.183521  216447 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-540723 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-540723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:38:29.183619  216447 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:38:29.208453  216447 cni.go:84] Creating CNI manager for ""
	I1122 00:38:29.208476  216447 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:38:29.208493  216447 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:38:29.208519  216447 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-540723 NodeName:embed-certs-540723 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:38:29.208700  216447 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-540723"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
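
The rendered config above is shipped to /var/tmp/minikube/kubeadm.yaml.new (see the 2231-byte scp below) and later promoted to kubeadm.yaml before init runs. The exact init invocation is not shown in this excerpt; a hedged sketch consistent with the "ignoring SystemVerification" note further down:

    # Sketch (assumed invocation, not shown verbatim in this log):
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=SystemVerification
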
	
	I1122 00:38:29.208776  216447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:38:29.217292  216447 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:38:29.217361  216447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:38:29.225202  216447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1122 00:38:29.238928  216447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:38:29.252105  216447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
	I1122 00:38:29.264981  216447 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:38:29.268473  216447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:38:29.278421  216447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:38:29.399031  216447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:38:29.416942  216447 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723 for IP: 192.168.76.2
	I1122 00:38:29.416961  216447 certs.go:195] generating shared ca certs ...
	I1122 00:38:29.416976  216447 certs.go:227] acquiring lock for ca certs: {Name:mk348a892ec4309987f6c81ee1acef4884ca62db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:29.417164  216447 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key
	I1122 00:38:29.417241  216447 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key
	I1122 00:38:29.417256  216447 certs.go:257] generating profile certs ...
	I1122 00:38:29.417344  216447 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.key
	I1122 00:38:29.417369  216447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.crt with IP's: []
	I1122 00:38:29.893582  216447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.crt ...
	I1122 00:38:29.893617  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.crt: {Name:mk2416a47b0f5758cd518e373a1a7cfbde1b2b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:29.893816  216447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.key ...
	I1122 00:38:29.893829  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.key: {Name:mk5a7bf352867aa5d2d260c12df3c6ab92be563a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:29.893923  216447 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key.4b98241a
	I1122 00:38:29.893939  216447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt.4b98241a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1122 00:38:30.461772  216447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt.4b98241a ...
	I1122 00:38:30.461811  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt.4b98241a: {Name:mk94af0bb370789c91c7967f5aa0aa8ff27f5f3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:30.462010  216447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key.4b98241a ...
	I1122 00:38:30.462029  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key.4b98241a: {Name:mk10a968fb19cf2147a5cafa1ab9037d5d64e4cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:30.462124  216447 certs.go:382] copying /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt.4b98241a -> /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt
	I1122 00:38:30.462215  216447 certs.go:386] copying /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key.4b98241a -> /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key
	I1122 00:38:30.462277  216447 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.key
	I1122 00:38:30.462292  216447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.crt with IP's: []
	I1122 00:38:30.897714  216447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.crt ...
	I1122 00:38:30.897745  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.crt: {Name:mk0afb616fb35d112fca628ec947733ed0afff85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:30.897932  216447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.key ...
	I1122 00:38:30.897947  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.key: {Name:mk289c592c281514d8f849877dc292a05466ff16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:30.898150  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem (1338 bytes)
	W1122 00:38:30.898198  216447 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623_empty.pem, impossibly tiny 0 bytes
	I1122 00:38:30.898212  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:38:30.898238  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:38:30.898268  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:38:30.898295  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem (1675 bytes)
	I1122 00:38:30.898352  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:38:30.898904  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:38:30.918070  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:38:30.937778  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:38:30.963377  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:38:30.981609  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:38:31.000386  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:38:31.021496  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:38:31.050013  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:38:31.077463  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem --> /usr/share/ca-certificates/5623.pem (1338 bytes)
	I1122 00:38:31.099698  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /usr/share/ca-certificates/56232.pem (1708 bytes)
	I1122 00:38:31.123981  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:38:31.149736  216447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:38:31.163437  216447 ssh_runner.go:195] Run: openssl version
	I1122 00:38:31.170222  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5623.pem && ln -fs /usr/share/ca-certificates/5623.pem /etc/ssl/certs/5623.pem"
	I1122 00:38:31.178719  216447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5623.pem
	I1122 00:38:31.182379  216447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/5623.pem
	I1122 00:38:31.182492  216447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5623.pem
	I1122 00:38:31.228180  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5623.pem /etc/ssl/certs/51391683.0"
	I1122 00:38:31.236753  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56232.pem && ln -fs /usr/share/ca-certificates/56232.pem /etc/ssl/certs/56232.pem"
	I1122 00:38:31.245478  216447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56232.pem
	I1122 00:38:31.249331  216447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/56232.pem
	I1122 00:38:31.249450  216447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56232.pem
	I1122 00:38:31.290709  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56232.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:38:31.298862  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:38:31.307372  216447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:38:31.311326  216447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:38:31.311412  216447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:38:31.354058  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
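
The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory CA lookup: the library opens trust anchors by subject-name hash, so each CA in /etc/ssl/certs needs a <hash>.0 symlink. A sketch of deriving that name for the minikube CA:

    # Sketch: compute the lookup name OpenSSL expects for a CA file.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$HASH"   # b5213941 in this run, matching the symlink above
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
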
	I1122 00:38:31.362855  216447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:38:31.366395  216447 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:38:31.366456  216447 kubeadm.go:401] StartCluster: {Name:embed-certs-540723 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-540723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:38:31.366527  216447 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:38:31.366585  216447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:38:31.392804  216447 cri.go:89] found id: ""
	I1122 00:38:31.392940  216447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:38:31.400856  216447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:38:31.408855  216447 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:38:31.408919  216447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:38:31.417567  216447 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:38:31.417588  216447 kubeadm.go:158] found existing configuration files:
	
	I1122 00:38:31.417641  216447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:38:31.425895  216447 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:38:31.425975  216447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:38:31.434228  216447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:38:31.442150  216447 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:38:31.442224  216447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:38:31.450158  216447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:38:31.458958  216447 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:38:31.459133  216447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:38:31.467266  216447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:38:31.475682  216447 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:38:31.475748  216447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
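The four grep/rm pairs above are a single idiom: each kubeconfig is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise deleted so kubeadm regenerates it; here every grep exits with status 2 simply because the files do not exist yet on a first start. A condensed sketch of the idiom (hypothetical helper, not minikube's actual code):

package main

import "os/exec"

// cleanStaleConfigs keeps each kubeconfig only if it already points at the
// expected control-plane endpoint; anything else is removed so kubeadm can
// rewrite it. Mirrors the grep-then-rm sequence in the log above.
func cleanStaleConfigs(endpoint string, files []string) {
	for _, f := range files {
		if exec.Command("sudo", "grep", endpoint, f).Run() != nil {
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}

func main() {
	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}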
	I1122 00:38:31.483333  216447 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:38:31.523373  216447 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:38:31.523658  216447 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:38:31.549882  216447 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:38:31.549963  216447 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:38:31.550003  216447 kubeadm.go:319] OS: Linux
	I1122 00:38:31.550055  216447 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:38:31.550110  216447 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:38:31.550161  216447 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:38:31.550215  216447 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:38:31.550267  216447 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:38:31.550325  216447 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:38:31.550376  216447 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:38:31.550428  216447 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:38:31.550478  216447 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:38:31.618420  216447 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:38:31.618572  216447 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:38:31.618690  216447 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:38:31.635602  216447 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:38:31.642187  216447 out.go:252]   - Generating certificates and keys ...
	I1122 00:38:31.642381  216447 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:38:31.642501  216447 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1122 00:38:27.953681  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:30.453000  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:32.325683  216447 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:38:32.392825  216447 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:38:32.785449  216447 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:38:34.358616  216447 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:38:34.664341  216447 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:38:34.664793  216447 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-540723 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:38:35.326587  216447 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:38:35.326923  216447 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-540723 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:38:35.758667  216447 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:38:36.306274  216447 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	W1122 00:38:32.953224  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:35.452759  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:37.449914  216447 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:38:37.450212  216447 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:38:37.616534  216447 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:38:38.054605  216447 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:38:38.514951  216447 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:38:39.149223  216447 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:38:39.471045  216447 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:38:39.484535  216447 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:38:39.484640  216447 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:38:39.490664  216447 out.go:252]   - Booting up control plane ...
	I1122 00:38:39.490773  216447 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:38:39.490850  216447 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:38:39.490929  216447 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:38:39.504097  216447 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:38:39.504426  216447 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:38:39.512316  216447 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:38:39.512656  216447 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:38:39.512882  216447 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:38:39.651431  216447 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:38:39.651553  216447 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:38:40.155848  216447 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 504.258422ms
	I1122 00:38:40.159269  216447 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:38:40.159367  216447 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1122 00:38:40.159833  216447 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:38:40.159923  216447 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1122 00:38:37.453344  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:39.952701  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:44.827398  216447 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.667730898s
	I1122 00:38:46.377701  216447 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.218442351s
	W1122 00:38:42.452710  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:44.453047  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:48.161343  216447 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002051247s
	I1122 00:38:48.182719  216447 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:38:48.199784  216447 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:38:48.214838  216447 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:38:48.215060  216447 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-540723 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:38:48.226808  216447 kubeadm.go:319] [bootstrap-token] Using token: 72kwgl.63h5iuu326tbwoyb
	I1122 00:38:48.229739  216447 out.go:252]   - Configuring RBAC rules ...
	I1122 00:38:48.229875  216447 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:38:48.235547  216447 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:38:48.251747  216447 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:38:48.256076  216447 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:38:48.260398  216447 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:38:48.264668  216447 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:38:48.568928  216447 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:38:49.012559  216447 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:38:49.570954  216447 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:38:49.572226  216447 kubeadm.go:319] 
	I1122 00:38:49.572299  216447 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:38:49.572305  216447 kubeadm.go:319] 
	I1122 00:38:49.572382  216447 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:38:49.572387  216447 kubeadm.go:319] 
	I1122 00:38:49.572411  216447 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:38:49.572470  216447 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:38:49.572521  216447 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:38:49.572525  216447 kubeadm.go:319] 
	I1122 00:38:49.572585  216447 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:38:49.572590  216447 kubeadm.go:319] 
	I1122 00:38:49.572637  216447 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:38:49.572640  216447 kubeadm.go:319] 
	I1122 00:38:49.572692  216447 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:38:49.572767  216447 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:38:49.572835  216447 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:38:49.572839  216447 kubeadm.go:319] 
	I1122 00:38:49.572924  216447 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:38:49.573001  216447 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:38:49.573006  216447 kubeadm.go:319] 
	I1122 00:38:49.573090  216447 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 72kwgl.63h5iuu326tbwoyb \
	I1122 00:38:49.573193  216447 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6ad26553e08ef3801627a7166e0bb20bf24427585c6187a46d63e60c79d4d84c \
	I1122 00:38:49.573214  216447 kubeadm.go:319] 	--control-plane 
	I1122 00:38:49.573218  216447 kubeadm.go:319] 
	I1122 00:38:49.573302  216447 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:38:49.573306  216447 kubeadm.go:319] 
	I1122 00:38:49.573389  216447 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 72kwgl.63h5iuu326tbwoyb \
	I1122 00:38:49.573491  216447 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6ad26553e08ef3801627a7166e0bb20bf24427585c6187a46d63e60c79d4d84c 
	I1122 00:38:49.578352  216447 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1122 00:38:49.578585  216447 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1122 00:38:49.578692  216447 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:38:49.578712  216447 cni.go:84] Creating CNI manager for ""
	I1122 00:38:49.578731  216447 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:38:49.581781  216447 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:38:49.584672  216447 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:38:49.589124  216447 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:38:49.589145  216447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:38:49.606557  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
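The CNI step pairs an in-memory write ("scp memory --> /var/tmp/minikube/cni.yaml", 2601 bytes of kindnet manifest) with an apply using the version-pinned kubectl and the node-local kubeconfig. A sketch of that write-then-apply pattern follows; the manifest bytes are a placeholder, since the real kindnet manifest is not reproduced in this log:

package main

import (
	"os"
	"os/exec"
)

// applyManifest mirrors the log's write + apply: persist the manifest bytes
// to the path on the node, then apply them with the pinned kubectl binary.
func applyManifest(manifest []byte) error {
	const path = "/var/tmp/minikube/cni.yaml"
	if err := os.WriteFile(path, manifest, 0o644); err != nil {
		return err
	}
	cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "--kubeconfig=/var/lib/minikube/kubeconfig", "-f", path)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	_ = applyManifest([]byte("# placeholder: kindnet DaemonSet manifest goes here\n"))
}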
	I1122 00:38:49.954312  216447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:38:49.954454  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:49.954553  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-540723 minikube.k8s.io/updated_at=2025_11_22T00_38_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=embed-certs-540723 minikube.k8s.io/primary=true
	I1122 00:38:50.220772  216447 ops.go:34] apiserver oom_adj: -16
	I1122 00:38:50.220893  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:50.721253  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:51.220969  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:51.721732  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1122 00:38:46.952514  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:49.451824  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:51.452290  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:52.221112  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:52.721227  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:53.221292  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:53.721901  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:54.221173  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:54.321511  216447 kubeadm.go:1114] duration metric: took 4.36711406s to wait for elevateKubeSystemPrivileges
	I1122 00:38:54.321544  216447 kubeadm.go:403] duration metric: took 22.955091646s to StartCluster
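The run of `kubectl get sa default` calls between 00:38:50 and 00:38:54 is a roughly 500ms poll: elevateKubeSystemPrivileges cannot finish until the token controller has created the "default" ServiceAccount. A sketch of that wait loop, assuming kubectl on PATH (hypothetical helper, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until the "default" ServiceAccount exists,
// matching the repeated "kubectl get sa default" lines in the log.
func waitForDefaultSA(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("kubectl", "get", "sa", "default",
			"--kubeconfig", "/var/lib/minikube/kubeconfig").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}

func main() {
	if err := waitForDefaultSA(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}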
	I1122 00:38:54.321562  216447 settings.go:142] acquiring lock: {Name:mk5b79634916fd13f05f4c848ff3e8b07cafa39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:54.321628  216447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:38:54.322955  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:54.323217  216447 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:38:54.323307  216447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:38:54.323608  216447 config.go:182] Loaded profile config "embed-certs-540723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:38:54.323653  216447 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:38:54.323715  216447 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-540723"
	I1122 00:38:54.323730  216447 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-540723"
	I1122 00:38:54.323751  216447 host.go:66] Checking if "embed-certs-540723" exists ...
	I1122 00:38:54.324237  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:54.324765  216447 addons.go:70] Setting default-storageclass=true in profile "embed-certs-540723"
	I1122 00:38:54.324787  216447 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-540723"
	I1122 00:38:54.325094  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:54.327622  216447 out.go:179] * Verifying Kubernetes components...
	I1122 00:38:54.330984  216447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:38:54.359128  216447 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:38:54.366691  216447 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:38:54.366717  216447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:38:54.366780  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:54.388907  216447 addons.go:239] Setting addon default-storageclass=true in "embed-certs-540723"
	I1122 00:38:54.389018  216447 host.go:66] Checking if "embed-certs-540723" exists ...
	I1122 00:38:54.389597  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:54.421212  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:54.437710  216447 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:38:54.437735  216447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:38:54.437795  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:54.475094  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:54.612038  216447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:38:54.634516  216447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:38:54.638625  216447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:38:54.752274  216447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:38:55.298115  216447 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1122 00:38:55.300587  216447 node_ready.go:35] waiting up to 6m0s for node "embed-certs-540723" to be "Ready" ...
	I1122 00:38:55.680642  216447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.041982843s)
	I1122 00:38:55.693324  216447 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1122 00:38:55.696202  216447 addons.go:530] duration metric: took 1.372543284s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:38:55.802763  216447 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-540723" context rescaled to 1 replicas
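An aside on the sed pipeline at 00:38:54.612: it rewrites the coredns ConfigMap in place, inserting a hosts block ahead of the `forward . /etc/resolv.conf` directive (so host.minikube.internal resolves to the host gateway, 192.168.76.1 for this profile) and a `log` directive ahead of `errors`. Reconstructed from those two sed expressions, the patched Corefile fragment should look roughly like this (unrelated directives elided):

.:53 {
        log
        errors
        ...
        hosts {
           192.168.76.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
}

The "host record injected into CoreDNS's ConfigMap" line at 00:38:55.298 confirms the replace succeeded.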
	W1122 00:38:53.951990  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:56.452697  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:57.304244  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:38:59.805110  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:38:58.952063  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:39:00.952275  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:39:01.452150  213043 node_ready.go:49] node "default-k8s-diff-port-080784" is "Ready"
	I1122 00:39:01.452176  213043 node_ready.go:38] duration metric: took 40.003129289s for node "default-k8s-diff-port-080784" to be "Ready" ...
	I1122 00:39:01.452191  213043 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:39:01.452247  213043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:39:01.465012  213043 api_server.go:72] duration metric: took 41.652970543s to wait for apiserver process to appear ...
	I1122 00:39:01.465037  213043 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:39:01.465057  213043 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1122 00:39:01.473970  213043 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1122 00:39:01.475088  213043 api_server.go:141] control plane version: v1.34.1
	I1122 00:39:01.475115  213043 api_server.go:131] duration metric: took 10.07016ms to wait for apiserver health ...
	I1122 00:39:01.475127  213043 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:39:01.478122  213043 system_pods.go:59] 8 kube-system pods found
	I1122 00:39:01.478163  213043 system_pods.go:61] "coredns-66bc5c9577-cg98c" [80a5ce0f-6a18-4c4a-a32b-d664baef9ec4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:39:01.478171  213043 system_pods.go:61] "etcd-default-k8s-diff-port-080784" [95f7a4b1-361c-4a66-8b61-b0b495303508] Running
	I1122 00:39:01.478176  213043 system_pods.go:61] "kindnet-cgr2l" [0dd2f6cd-8657-48d2-940c-c4cd2e89d63d] Running
	I1122 00:39:01.478181  213043 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-080784" [35c0a21a-519f-4afc-9f5f-eca23e831e3c] Running
	I1122 00:39:01.478193  213043 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-080784" [3e5789be-304d-4372-88c3-fb0002a2c846] Running
	I1122 00:39:01.478205  213043 system_pods.go:61] "kube-proxy-l9z8d" [5d948362-27cd-47c6-8af3-a61fd3ef1c51] Running
	I1122 00:39:01.478209  213043 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-080784" [1c0016af-f6d1-489d-b615-7bcb32edd019] Running
	I1122 00:39:01.478215  213043 system_pods.go:61] "storage-provisioner" [c27df238-e4f6-41ab-84bf-86a694ffab65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:39:01.478221  213043 system_pods.go:74] duration metric: took 3.088805ms to wait for pod list to return data ...
	I1122 00:39:01.478233  213043 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:39:01.480753  213043 default_sa.go:45] found service account: "default"
	I1122 00:39:01.480777  213043 default_sa.go:55] duration metric: took 2.537208ms for default service account to be created ...
	I1122 00:39:01.480787  213043 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:39:01.484034  213043 system_pods.go:86] 8 kube-system pods found
	I1122 00:39:01.484070  213043 system_pods.go:89] "coredns-66bc5c9577-cg98c" [80a5ce0f-6a18-4c4a-a32b-d664baef9ec4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:39:01.484077  213043 system_pods.go:89] "etcd-default-k8s-diff-port-080784" [95f7a4b1-361c-4a66-8b61-b0b495303508] Running
	I1122 00:39:01.484086  213043 system_pods.go:89] "kindnet-cgr2l" [0dd2f6cd-8657-48d2-940c-c4cd2e89d63d] Running
	I1122 00:39:01.484092  213043 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-080784" [35c0a21a-519f-4afc-9f5f-eca23e831e3c] Running
	I1122 00:39:01.484097  213043 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-080784" [3e5789be-304d-4372-88c3-fb0002a2c846] Running
	I1122 00:39:01.484101  213043 system_pods.go:89] "kube-proxy-l9z8d" [5d948362-27cd-47c6-8af3-a61fd3ef1c51] Running
	I1122 00:39:01.484105  213043 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-080784" [1c0016af-f6d1-489d-b615-7bcb32edd019] Running
	I1122 00:39:01.484132  213043 system_pods.go:89] "storage-provisioner" [c27df238-e4f6-41ab-84bf-86a694ffab65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:39:01.484158  213043 retry.go:31] will retry after 230.813177ms: missing components: kube-dns
	I1122 00:39:01.719021  213043 system_pods.go:86] 8 kube-system pods found
	I1122 00:39:01.719063  213043 system_pods.go:89] "coredns-66bc5c9577-cg98c" [80a5ce0f-6a18-4c4a-a32b-d664baef9ec4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:39:01.719076  213043 system_pods.go:89] "etcd-default-k8s-diff-port-080784" [95f7a4b1-361c-4a66-8b61-b0b495303508] Running
	I1122 00:39:01.719082  213043 system_pods.go:89] "kindnet-cgr2l" [0dd2f6cd-8657-48d2-940c-c4cd2e89d63d] Running
	I1122 00:39:01.719088  213043 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-080784" [35c0a21a-519f-4afc-9f5f-eca23e831e3c] Running
	I1122 00:39:01.719092  213043 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-080784" [3e5789be-304d-4372-88c3-fb0002a2c846] Running
	I1122 00:39:01.719101  213043 system_pods.go:89] "kube-proxy-l9z8d" [5d948362-27cd-47c6-8af3-a61fd3ef1c51] Running
	I1122 00:39:01.719105  213043 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-080784" [1c0016af-f6d1-489d-b615-7bcb32edd019] Running
	I1122 00:39:01.719120  213043 system_pods.go:89] "storage-provisioner" [c27df238-e4f6-41ab-84bf-86a694ffab65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:39:01.719141  213043 retry.go:31] will retry after 327.1869ms: missing components: kube-dns
	W1122 00:39:01.805167  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:04.304251  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:06.304378  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	I1122 00:39:02.051380  213043 system_pods.go:86] 8 kube-system pods found
	I1122 00:39:02.051419  213043 system_pods.go:89] "coredns-66bc5c9577-cg98c" [80a5ce0f-6a18-4c4a-a32b-d664baef9ec4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:39:02.051427  213043 system_pods.go:89] "etcd-default-k8s-diff-port-080784" [95f7a4b1-361c-4a66-8b61-b0b495303508] Running
	I1122 00:39:02.051434  213043 system_pods.go:89] "kindnet-cgr2l" [0dd2f6cd-8657-48d2-940c-c4cd2e89d63d] Running
	I1122 00:39:02.051440  213043 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-080784" [35c0a21a-519f-4afc-9f5f-eca23e831e3c] Running
	I1122 00:39:02.051445  213043 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-080784" [3e5789be-304d-4372-88c3-fb0002a2c846] Running
	I1122 00:39:02.051449  213043 system_pods.go:89] "kube-proxy-l9z8d" [5d948362-27cd-47c6-8af3-a61fd3ef1c51] Running
	I1122 00:39:02.051453  213043 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-080784" [1c0016af-f6d1-489d-b615-7bcb32edd019] Running
	I1122 00:39:02.051459  213043 system_pods.go:89] "storage-provisioner" [c27df238-e4f6-41ab-84bf-86a694ffab65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:39:02.051478  213043 retry.go:31] will retry after 373.645962ms: missing components: kube-dns
	I1122 00:39:02.429843  213043 system_pods.go:86] 8 kube-system pods found
	I1122 00:39:02.429883  213043 system_pods.go:89] "coredns-66bc5c9577-cg98c" [80a5ce0f-6a18-4c4a-a32b-d664baef9ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:39:02.429891  213043 system_pods.go:89] "etcd-default-k8s-diff-port-080784" [95f7a4b1-361c-4a66-8b61-b0b495303508] Running
	I1122 00:39:02.429897  213043 system_pods.go:89] "kindnet-cgr2l" [0dd2f6cd-8657-48d2-940c-c4cd2e89d63d] Running
	I1122 00:39:02.429902  213043 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-080784" [35c0a21a-519f-4afc-9f5f-eca23e831e3c] Running
	I1122 00:39:02.429906  213043 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-080784" [3e5789be-304d-4372-88c3-fb0002a2c846] Running
	I1122 00:39:02.429911  213043 system_pods.go:89] "kube-proxy-l9z8d" [5d948362-27cd-47c6-8af3-a61fd3ef1c51] Running
	I1122 00:39:02.429915  213043 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-080784" [1c0016af-f6d1-489d-b615-7bcb32edd019] Running
	I1122 00:39:02.429919  213043 system_pods.go:89] "storage-provisioner" [c27df238-e4f6-41ab-84bf-86a694ffab65] Running
	I1122 00:39:02.429927  213043 system_pods.go:126] duration metric: took 949.133593ms to wait for k8s-apps to be running ...
	I1122 00:39:02.429939  213043 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:39:02.429997  213043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:39:02.443263  213043 system_svc.go:56] duration metric: took 13.314939ms WaitForService to wait for kubelet
	I1122 00:39:02.443294  213043 kubeadm.go:587] duration metric: took 42.631253498s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:39:02.443312  213043 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:39:02.446431  213043 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:39:02.446465  213043 node_conditions.go:123] node cpu capacity is 2
	I1122 00:39:02.446478  213043 node_conditions.go:105] duration metric: took 3.161093ms to run NodePressure ...
	I1122 00:39:02.446492  213043 start.go:242] waiting for startup goroutines ...
	I1122 00:39:02.446499  213043 start.go:247] waiting for cluster config update ...
	I1122 00:39:02.446510  213043 start.go:256] writing updated cluster config ...
	I1122 00:39:02.446819  213043 ssh_runner.go:195] Run: rm -f paused
	I1122 00:39:02.450755  213043 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:39:02.454937  213043 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cg98c" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:39:04.460325  213043 pod_ready.go:104] pod "coredns-66bc5c9577-cg98c" is not "Ready", error: <nil>
	W1122 00:39:06.461048  213043 pod_ready.go:104] pod "coredns-66bc5c9577-cg98c" is not "Ready", error: <nil>
	W1122 00:39:08.304811  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:10.804148  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:08.960506  213043 pod_ready.go:104] pod "coredns-66bc5c9577-cg98c" is not "Ready", error: <nil>
	W1122 00:39:11.460496  213043 pod_ready.go:104] pod "coredns-66bc5c9577-cg98c" is not "Ready", error: <nil>
	I1122 00:39:12.460460  213043 pod_ready.go:94] pod "coredns-66bc5c9577-cg98c" is "Ready"
	I1122 00:39:12.460490  213043 pod_ready.go:86] duration metric: took 10.005524324s for pod "coredns-66bc5c9577-cg98c" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.463266  213043 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.468635  213043 pod_ready.go:94] pod "etcd-default-k8s-diff-port-080784" is "Ready"
	I1122 00:39:12.468662  213043 pod_ready.go:86] duration metric: took 5.367762ms for pod "etcd-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.471150  213043 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.476063  213043 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-080784" is "Ready"
	I1122 00:39:12.476093  213043 pod_ready.go:86] duration metric: took 4.911599ms for pod "kube-apiserver-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.478325  213043 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.658209  213043 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-080784" is "Ready"
	I1122 00:39:12.658235  213043 pod_ready.go:86] duration metric: took 179.881353ms for pod "kube-controller-manager-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.858817  213043 pod_ready.go:83] waiting for pod "kube-proxy-l9z8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:13.258062  213043 pod_ready.go:94] pod "kube-proxy-l9z8d" is "Ready"
	I1122 00:39:13.258088  213043 pod_ready.go:86] duration metric: took 399.246444ms for pod "kube-proxy-l9z8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:13.458314  213043 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:13.858666  213043 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-080784" is "Ready"
	I1122 00:39:13.858699  213043 pod_ready.go:86] duration metric: took 400.34227ms for pod "kube-scheduler-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:13.858714  213043 pod_ready.go:40] duration metric: took 11.407928369s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:39:13.931811  213043 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:39:13.936087  213043 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-080784" cluster and "default" namespace by default
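The closing "minor skew: 1" note reflects kubectl's version-skew policy: a client within one minor version of the server is supported, so kubectl 1.33.2 against a 1.34.1 control plane only earns an informational line rather than a warning. A minimal sketch of that comparison, assuming well-formed major.minor.patch strings (not minikube's actual implementation):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute minor-version gap between client and
// server, as in "kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)".
func minorSkew(client, server string) int {
	minor := func(v string) int {
		m, _ := strconv.Atoi(strings.Split(v, ".")[1]) // assumes "x.y.z" form
		return m
	}
	d := minor(client) - minor(server)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	fmt.Println(minorSkew("1.33.2", "1.34.1")) // 1: within kubectl's +/-1 policy
}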
	W1122 00:39:12.804205  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:14.804326  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:16.805094  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:19.304673  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:21.304729  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	2efaa57a3d019       1611cd07b61d5       7 seconds ago        Running             busybox                   0                   594b8e5974257       busybox                                                default
	0d64e81dc0ad8       138784d87c9c5       22 seconds ago       Running             coredns                   0                   07d41afbffbbf       coredns-66bc5c9577-cg98c                               kube-system
	3951649c708fd       ba04bb24b9575       22 seconds ago       Running             storage-provisioner       0                   5476027915aa3       storage-provisioner                                    kube-system
	252561cb6cab2       b1a8c6f707935       About a minute ago   Running             kindnet-cni               0                   4bfb4d82a2f10       kindnet-cgr2l                                          kube-system
	3b6f77ac2c3c3       05baa95f5142d       About a minute ago   Running             kube-proxy                0                   6a4366acebe4b       kube-proxy-l9z8d                                       kube-system
	d9ab3ff2e6b49       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   13b2c09911bdf       kube-scheduler-default-k8s-diff-port-080784            kube-system
	1d53549631ceb       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   ac68fc1346348       kube-apiserver-default-k8s-diff-port-080784            kube-system
	e6948214b5c72       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   6aef774c41c11       kube-controller-manager-default-k8s-diff-port-080784   kube-system
	1f283db038f66       a1894772a478e       About a minute ago   Running             etcd                      0                   27ee9c78fecad       etcd-default-k8s-diff-port-080784                      kube-system
	
	
	==> containerd <==
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.854880143Z" level=info msg="StartContainer for \"3951649c708fd5267d0ad4e41c0bfc6891129e251b06f5aa4d39d9c92c5aefd9\""
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.856388127Z" level=info msg="connecting to shim 3951649c708fd5267d0ad4e41c0bfc6891129e251b06f5aa4d39d9c92c5aefd9" address="unix:///run/containerd/s/f33cf1c7adbdc4e7f35afd586b0ffc559ff0fce9efc228258407bdc4102469b3" protocol=ttrpc version=3
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.857696641Z" level=info msg="CreateContainer within sandbox \"07d41afbffbbf64cbccdda513f865ef5a1d87bf97f3a13da6ae2dedc71063a50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.875157376Z" level=info msg="Container 0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.889962666Z" level=info msg="CreateContainer within sandbox \"07d41afbffbbf64cbccdda513f865ef5a1d87bf97f3a13da6ae2dedc71063a50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed\""
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.891168869Z" level=info msg="StartContainer for \"0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed\""
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.893186267Z" level=info msg="connecting to shim 0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed" address="unix:///run/containerd/s/ab8916ef6eecc15b7daf818516f8efae5f73a7b9ec0b75b83192d6e943822f50" protocol=ttrpc version=3
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.982250457Z" level=info msg="StartContainer for \"3951649c708fd5267d0ad4e41c0bfc6891129e251b06f5aa4d39d9c92c5aefd9\" returns successfully"
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.983340056Z" level=info msg="StartContainer for \"0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed\" returns successfully"
	Nov 22 00:39:14 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:14.537580307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2004090a-bf01-4959-8a39-43712a0513ef,Namespace:default,Attempt:0,}"
	Nov 22 00:39:14 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:14.598371289Z" level=info msg="connecting to shim 594b8e597425723af373549357bf05fcab57bf8248f1ce7f9b1771bc1be1c3c0" address="unix:///run/containerd/s/a381171f79942286ac86de984d662cb9f01484f7d8fbf9f432477df719c8408e" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:39:14 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:14.659836898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2004090a-bf01-4959-8a39-43712a0513ef,Namespace:default,Attempt:0,} returns sandbox id \"594b8e597425723af373549357bf05fcab57bf8248f1ce7f9b1771bc1be1c3c0\""
	Nov 22 00:39:14 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:14.662707887Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.616165605Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.617969289Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937185"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.620240483Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.623371430Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.624235073Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 1.961280527s"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.624281465Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.631002755Z" level=info msg="CreateContainer within sandbox \"594b8e597425723af373549357bf05fcab57bf8248f1ce7f9b1771bc1be1c3c0\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.645784749Z" level=info msg="Container 2efaa57a3d0190cd6de65e98a645999c92d75c38037a80d865e7f0ad1c376f9d: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.658139412Z" level=info msg="CreateContainer within sandbox \"594b8e597425723af373549357bf05fcab57bf8248f1ce7f9b1771bc1be1c3c0\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"2efaa57a3d0190cd6de65e98a645999c92d75c38037a80d865e7f0ad1c376f9d\""
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.659242303Z" level=info msg="StartContainer for \"2efaa57a3d0190cd6de65e98a645999c92d75c38037a80d865e7f0ad1c376f9d\""
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.660483165Z" level=info msg="connecting to shim 2efaa57a3d0190cd6de65e98a645999c92d75c38037a80d865e7f0ad1c376f9d" address="unix:///run/containerd/s/a381171f79942286ac86de984d662cb9f01484f7d8fbf9f432477df719c8408e" protocol=ttrpc version=3
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.722622949Z" level=info msg="StartContainer for \"2efaa57a3d0190cd6de65e98a645999c92d75c38037a80d865e7f0ad1c376f9d\" returns successfully"
	
	
	==> coredns [0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39659 - 41661 "HINFO IN 1977687106285590801.1285743489587120048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012847856s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-080784
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-080784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=default-k8s-diff-port-080784
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_38_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:38:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-080784
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:39:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:39:01 +0000   Sat, 22 Nov 2025 00:38:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:39:01 +0000   Sat, 22 Nov 2025 00:38:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:39:01 +0000   Sat, 22 Nov 2025 00:38:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:39:01 +0000   Sat, 22 Nov 2025 00:39:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-080784
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                9cd3571d-d1d2-40b1-b21c-06f427a0bd0e
	  Boot ID:                    4e86741a-5896-4eb6-97ce-70ea8beedc67
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-cg98c                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     65s
	  kube-system                 etcd-default-k8s-diff-port-080784                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         73s
	  kube-system                 kindnet-cgr2l                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      65s
	  kube-system                 kube-apiserver-default-k8s-diff-port-080784             250m (12%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-080784    200m (10%)    0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 kube-proxy-l9z8d                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         65s
	  kube-system                 kube-scheduler-default-k8s-diff-port-080784             100m (5%)     0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         62s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 63s                kube-proxy       
	  Normal   NodeAllocatableEnforced  81s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 81s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  81s (x8 over 81s)  kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    81s (x8 over 81s)  kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     81s (x7 over 81s)  kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasSufficientPID
	  Normal   Starting                 81s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 71s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 71s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  70s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  70s                kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    70s                kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     70s                kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           66s                node-controller  Node default-k8s-diff-port-080784 event: Registered Node default-k8s-diff-port-080784 in Controller
	  Normal   NodeReady                23s                kubelet          Node default-k8s-diff-port-080784 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.017121] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498034] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037542] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808656] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.648915] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 23:58] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=00000000f9ea0775{9P.session} n=0000000035823f74
	[  +0.001177] FS-Cache: O-key=[10] '34323935353131333738'
	[  +0.000819] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000f9ea0775{9P.session} n=00000000dbfd8515
	[  +0.001154] FS-Cache: N-key=[10] '34323935353131333738'
	[Nov22 00:00] hrtimer: interrupt took 9958927 ns
	
	
	==> etcd [1f283db038f6611fb92be8c77623b177cb33d57f8a5645f03b6d191a2594fc2d] <==
	{"level":"warn","ts":"2025-11-22T00:38:08.447722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.493267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.545299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.554590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.591276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.631666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.649527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.690780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.720319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.749025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.776689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.810026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.828167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.868796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.887690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.916649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.938555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.966056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.980419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.016845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.042331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.072639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.094192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.122418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.289578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42232","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:39:24 up  1:21,  0 user,  load average: 2.54, 3.46, 2.89
	Linux default-k8s-diff-port-080784 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [252561cb6cab27d6a08d413150f7d821814252ec16e8d8b445220ccf8ed920c2] <==
	I1122 00:38:21.022804       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:38:21.023061       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:38:21.023178       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:38:21.023188       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:38:21.023204       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:38:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:38:21.225273       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:38:21.225390       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:38:21.225451       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:38:21.312520       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:38:51.225520       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:38:51.312907       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1122 00:38:51.313236       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:38:51.313491       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1122 00:38:52.726465       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:38:52.726523       1 metrics.go:72] Registering metrics
	I1122 00:38:52.726613       1 controller.go:711] "Syncing nftables rules"
	I1122 00:39:01.225186       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:39:01.225235       1 main.go:301] handling current node
	I1122 00:39:11.232132       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:39:11.232208       1 main.go:301] handling current node
	I1122 00:39:21.226548       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:39:21.226695       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d53549631ceb733afa5892dc05607424c2b5352e3b607632d6fe7db11205546] <==
	I1122 00:38:11.050303       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:38:11.053370       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:38:11.121823       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:11.122202       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:38:11.139252       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:38:11.211003       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:11.211780       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:38:11.328609       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:38:11.387845       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:38:11.388048       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:38:12.535754       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:38:12.628899       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:38:12.737682       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:38:12.751242       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:38:12.753242       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:38:12.766508       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:38:13.450047       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:38:13.696935       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:38:13.749148       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:38:13.769987       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:38:18.756129       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:18.763071       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:19.051157       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:38:19.201072       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1122 00:39:23.365523       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:38330: use of closed network connection
	
	
	==> kube-controller-manager [e6948214b5c72c4b8f9a109a57b816f6a486408644295454dbb384df552ea8d7] <==
	I1122 00:38:18.537276       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:38:18.543731       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:38:18.544130       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:38:18.544231       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:38:18.547506       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:38:18.547532       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:38:18.548539       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:38:18.548607       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:38:18.548637       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:38:18.549045       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:38:18.554347       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:38:18.554512       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:38:18.554548       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:38:18.554586       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:38:18.554603       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:38:18.554608       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:38:18.554613       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:38:18.566794       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:38:18.566976       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:38:18.566987       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:38:18.566993       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:38:18.575664       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:38:18.601090       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-080784" podCIDRs=["10.244.0.0/24"]
	I1122 00:38:18.625955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:39:03.528619       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3b6f77ac2c3c3d3ce2d9fb2efa01e84808ffcdc9a6c4657767c211ebd5bddbd1] <==
	I1122 00:38:21.135387       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:38:21.235754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:38:21.337338       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:38:21.337404       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:38:21.337517       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:38:21.380814       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:38:21.380865       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:38:21.387384       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:38:21.387899       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:38:21.387927       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:38:21.392915       1 config.go:200] "Starting service config controller"
	I1122 00:38:21.392931       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:38:21.392947       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:38:21.392951       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:38:21.392962       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:38:21.392966       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:38:21.393793       1 config.go:309] "Starting node config controller"
	I1122 00:38:21.393802       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:38:21.393809       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:38:21.493623       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:38:21.493675       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:38:21.493716       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d9ab3ff2e6b49bf65ed2711f9dfb88ffa0b207339e178767951977bb5979d8bb] <==
	I1122 00:38:11.396699       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:38:11.400967       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:38:11.401175       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:38:11.433657       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:38:11.401193       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:38:11.405157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:38:11.432541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:38:11.432889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:38:11.432480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:38:11.451121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:38:11.452051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:38:11.451909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:38:11.452370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:38:11.452463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:38:11.452711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:38:11.452914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:38:11.452971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:38:11.453006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:38:11.453054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:38:11.453109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:38:11.451831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:38:11.453154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:38:11.454459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:38:11.457114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1122 00:38:13.134203       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:38:15 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:15.100926    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-080784" podStartSLOduration=1.099001425 podStartE2EDuration="1.099001425s" podCreationTimestamp="2025-11-22 00:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:15.068950829 +0000 UTC m=+1.488545990" watchObservedRunningTime="2025-11-22 00:38:15.099001425 +0000 UTC m=+1.518596619"
	Nov 22 00:38:15 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:15.140076    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-080784" podStartSLOduration=1.140055734 podStartE2EDuration="1.140055734s" podCreationTimestamp="2025-11-22 00:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:15.136648212 +0000 UTC m=+1.556243398" watchObservedRunningTime="2025-11-22 00:38:15.140055734 +0000 UTC m=+1.559650903"
	Nov 22 00:38:15 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:15.143979    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-080784" podStartSLOduration=4.143960821 podStartE2EDuration="4.143960821s" podCreationTimestamp="2025-11-22 00:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:15.104350521 +0000 UTC m=+1.523945673" watchObservedRunningTime="2025-11-22 00:38:15.143960821 +0000 UTC m=+1.563555982"
	Nov 22 00:38:15 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:15.176459    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-080784" podStartSLOduration=1.176439857 podStartE2EDuration="1.176439857s" podCreationTimestamp="2025-11-22 00:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:15.157196327 +0000 UTC m=+1.576791512" watchObservedRunningTime="2025-11-22 00:38:15.176439857 +0000 UTC m=+1.596035010"
	Nov 22 00:38:18 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:18.638168    1479 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:38:18 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:18.643831    1479 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319601    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0dd2f6cd-8657-48d2-940c-c4cd2e89d63d-cni-cfg\") pod \"kindnet-cgr2l\" (UID: \"0dd2f6cd-8657-48d2-940c-c4cd2e89d63d\") " pod="kube-system/kindnet-cgr2l"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319672    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq5pf\" (UniqueName: \"kubernetes.io/projected/0dd2f6cd-8657-48d2-940c-c4cd2e89d63d-kube-api-access-tq5pf\") pod \"kindnet-cgr2l\" (UID: \"0dd2f6cd-8657-48d2-940c-c4cd2e89d63d\") " pod="kube-system/kindnet-cgr2l"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319699    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d948362-27cd-47c6-8af3-a61fd3ef1c51-xtables-lock\") pod \"kube-proxy-l9z8d\" (UID: \"5d948362-27cd-47c6-8af3-a61fd3ef1c51\") " pod="kube-system/kube-proxy-l9z8d"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319733    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0dd2f6cd-8657-48d2-940c-c4cd2e89d63d-lib-modules\") pod \"kindnet-cgr2l\" (UID: \"0dd2f6cd-8657-48d2-940c-c4cd2e89d63d\") " pod="kube-system/kindnet-cgr2l"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319752    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d948362-27cd-47c6-8af3-a61fd3ef1c51-kube-proxy\") pod \"kube-proxy-l9z8d\" (UID: \"5d948362-27cd-47c6-8af3-a61fd3ef1c51\") " pod="kube-system/kube-proxy-l9z8d"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319769    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d948362-27cd-47c6-8af3-a61fd3ef1c51-lib-modules\") pod \"kube-proxy-l9z8d\" (UID: \"5d948362-27cd-47c6-8af3-a61fd3ef1c51\") " pod="kube-system/kube-proxy-l9z8d"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319784    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqzv7\" (UniqueName: \"kubernetes.io/projected/5d948362-27cd-47c6-8af3-a61fd3ef1c51-kube-api-access-cqzv7\") pod \"kube-proxy-l9z8d\" (UID: \"5d948362-27cd-47c6-8af3-a61fd3ef1c51\") " pod="kube-system/kube-proxy-l9z8d"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319846    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0dd2f6cd-8657-48d2-940c-c4cd2e89d63d-xtables-lock\") pod \"kindnet-cgr2l\" (UID: \"0dd2f6cd-8657-48d2-940c-c4cd2e89d63d\") " pod="kube-system/kindnet-cgr2l"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.482457    1479 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:38:21 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:21.758012    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l9z8d" podStartSLOduration=2.757993228 podStartE2EDuration="2.757993228s" podCreationTimestamp="2025-11-22 00:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:21.142634346 +0000 UTC m=+7.562229498" watchObservedRunningTime="2025-11-22 00:38:21.757993228 +0000 UTC m=+8.177588381"
	Nov 22 00:38:22 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:22.488071    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cgr2l" podStartSLOduration=3.4880490699999998 podStartE2EDuration="3.48804907s" podCreationTimestamp="2025-11-22 00:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:22.349550108 +0000 UTC m=+8.769145261" watchObservedRunningTime="2025-11-22 00:38:22.48804907 +0000 UTC m=+8.907644223"
	Nov 22 00:39:01 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:01.331307    1479 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:39:01 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:01.433827    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80a5ce0f-6a18-4c4a-a32b-d664baef9ec4-config-volume\") pod \"coredns-66bc5c9577-cg98c\" (UID: \"80a5ce0f-6a18-4c4a-a32b-d664baef9ec4\") " pod="kube-system/coredns-66bc5c9577-cg98c"
	Nov 22 00:39:01 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:01.433882    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrkkn\" (UniqueName: \"kubernetes.io/projected/80a5ce0f-6a18-4c4a-a32b-d664baef9ec4-kube-api-access-vrkkn\") pod \"coredns-66bc5c9577-cg98c\" (UID: \"80a5ce0f-6a18-4c4a-a32b-d664baef9ec4\") " pod="kube-system/coredns-66bc5c9577-cg98c"
	Nov 22 00:39:01 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:01.433908    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-564wj\" (UniqueName: \"kubernetes.io/projected/c27df238-e4f6-41ab-84bf-86a694ffab65-kube-api-access-564wj\") pod \"storage-provisioner\" (UID: \"c27df238-e4f6-41ab-84bf-86a694ffab65\") " pod="kube-system/storage-provisioner"
	Nov 22 00:39:01 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:01.433933    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c27df238-e4f6-41ab-84bf-86a694ffab65-tmp\") pod \"storage-provisioner\" (UID: \"c27df238-e4f6-41ab-84bf-86a694ffab65\") " pod="kube-system/storage-provisioner"
	Nov 22 00:39:02 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:02.237527    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cg98c" podStartSLOduration=43.237509057 podStartE2EDuration="43.237509057s" podCreationTimestamp="2025-11-22 00:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:39:02.237128539 +0000 UTC m=+48.656723708" watchObservedRunningTime="2025-11-22 00:39:02.237509057 +0000 UTC m=+48.657104218"
	Nov 22 00:39:12 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:12.250559    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=50.250541573 podStartE2EDuration="50.250541573s" podCreationTimestamp="2025-11-22 00:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:39:02.257014989 +0000 UTC m=+48.676610199" watchObservedRunningTime="2025-11-22 00:39:12.250541573 +0000 UTC m=+58.670136726"
	Nov 22 00:39:14 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:14.316553    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxhm\" (UniqueName: \"kubernetes.io/projected/2004090a-bf01-4959-8a39-43712a0513ef-kube-api-access-5rxhm\") pod \"busybox\" (UID: \"2004090a-bf01-4959-8a39-43712a0513ef\") " pod="default/busybox"
	
	
	==> storage-provisioner [3951649c708fd5267d0ad4e41c0bfc6891129e251b06f5aa4d39d9c92c5aefd9] <==
	W1122 00:39:02.086347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:02.092316       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:39:02.181639       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-080784_279acefc-3577-4644-802b-e6b20a9acf49!
	W1122 00:39:04.095759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:04.100802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:06.104200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:06.108913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:08.112457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:08.119156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:10.124715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:10.129308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:12.133121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:12.141425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:14.147838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:14.166043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:16.168717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:16.175123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:18.179382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:18.186207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:20.188856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:20.193409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:22.197556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:22.202506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:24.206408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:24.212590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-080784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
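For reference, the non-Running-pod check above can be replayed by hand; a minimal sketch, assuming the context name from this report (the jsonpath range expression is an illustrative variant, not the harness's exact invocation) — empty output means every pod is in the Running phase:

	kubectl --context default-k8s-diff-port-080784 get pods -A --field-selector=status.phase!=Running -o=jsonpath='{range .items[*]}{.metadata.namespace}/{.metadata.name}{"\n"}{end}'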
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-080784
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-080784:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38",
	        "Created": "2025-11-22T00:37:47.326721111Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 213431,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:37:47.405552076Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38/hostname",
	        "HostsPath": "/var/lib/docker/containers/ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38/hosts",
	        "LogPath": "/var/lib/docker/containers/ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38/ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38-json.log",
	        "Name": "/default-k8s-diff-port-080784",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-080784:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-080784",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ac2a6eee5f6b29797effdf74d7d4eb22cc5a691b125e0e9dc4dfcc5691462a38",
	                "LowerDir": "/var/lib/docker/overlay2/90d75a3a12f9b620c4ff64f5acb73959349482996193aae272e4736aa79307da-init/diff:/var/lib/docker/overlay2/7cce95e9587a813ce5f3ee5f28c6de3b78ed608010774b6d981aecaad739a571/diff",
	                "MergedDir": "/var/lib/docker/overlay2/90d75a3a12f9b620c4ff64f5acb73959349482996193aae272e4736aa79307da/merged",
	                "UpperDir": "/var/lib/docker/overlay2/90d75a3a12f9b620c4ff64f5acb73959349482996193aae272e4736aa79307da/diff",
	                "WorkDir": "/var/lib/docker/overlay2/90d75a3a12f9b620c4ff64f5acb73959349482996193aae272e4736aa79307da/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-080784",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-080784/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-080784",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-080784",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-080784",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d0b4b9a9685daa50934a6cdbf7e954d3579b493735b3e580febbc2d178d98586",
	            "SandboxKey": "/var/run/docker/netns/d0b4b9a9685d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-080784": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ee:d8:a7:6a:56:f2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "791319b2b020217842d6d72bba721e8e9b81db7f24032687c53843e39473054c",
	                    "EndpointID": "ac053bd91ee2ee45e7f9fdad2f1462d803b9b1aa2c2d598764d8db0a32b6f2c2",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-080784",
	                        "ac2a6eee5f6b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
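Single fields can be read out of the inspect JSON above with a docker Go template rather than scanning the whole document; a minimal sketch, reusing the container name above and the same template the harness itself runs further down in these logs:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' default-k8s-diff-port-080784

Against the Ports map above this prints 33063, the host port mapped to the container's SSH port.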
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-080784 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-080784 logs -n 25: (1.202055764s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p force-systemd-env-115975 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd                                                                                                                                    │ force-systemd-env-115975     │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd                                                                                                                             │ kubernetes-upgrade-381698    │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │                     │
	│ start   │ -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd                                                                                                      │ kubernetes-upgrade-381698    │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p kubernetes-upgrade-381698                                                                                                                                                                                                                        │ kubernetes-upgrade-381698    │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p cert-expiration-285797 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ force-systemd-env-115975 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-115975     │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p force-systemd-env-115975                                                                                                                                                                                                                         │ force-systemd-env-115975     │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p cert-options-089440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ cert-options-089440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ -p cert-options-089440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ delete  │ -p cert-options-089440                                                                                                                                                                                                                              │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ start   │ -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-187160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ stop    │ -p old-k8s-version-187160 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-187160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ start   │ -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:37 UTC │
	│ image   │ old-k8s-version-187160 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ pause   │ -p old-k8s-version-187160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ unpause │ -p old-k8s-version-187160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ delete  │ -p old-k8s-version-187160                                                                                                                                                                                                                           │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ delete  │ -p old-k8s-version-187160                                                                                                                                                                                                                           │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ start   │ -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:39 UTC │
	│ start   │ -p cert-expiration-285797 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │ 22 Nov 25 00:38 UTC │
	│ delete  │ -p cert-expiration-285797                                                                                                                                                                                                                           │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │ 22 Nov 25 00:38 UTC │
	│ start   │ -p embed-certs-540723 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:38:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:38:16.724969  216447 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:38:16.725145  216447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:38:16.725157  216447 out.go:374] Setting ErrFile to fd 2...
	I1122 00:38:16.725163  216447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:38:16.725402  216447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:38:16.725798  216447 out.go:368] Setting JSON to false
	I1122 00:38:16.726726  216447 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4834,"bootTime":1763767063,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1122 00:38:16.726792  216447 start.go:143] virtualization:  
	I1122 00:38:16.730313  216447 out.go:179] * [embed-certs-540723] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:38:16.734885  216447 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:38:16.735035  216447 notify.go:221] Checking for updates...
	I1122 00:38:16.742549  216447 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:38:16.746070  216447 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:38:16.749321  216447 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1122 00:38:16.752546  216447 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:38:16.755738  216447 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:38:16.759353  216447 config.go:182] Loaded profile config "default-k8s-diff-port-080784": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:38:16.759481  216447 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:38:16.785364  216447 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:38:16.785622  216447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:38:16.848069  216447 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:38:16.837751425 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:38:16.848177  216447 docker.go:319] overlay module found
	I1122 00:38:16.851526  216447 out.go:179] * Using the docker driver based on user configuration
	I1122 00:38:14.287881  213043 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:38:14.292452  213043 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:38:14.292474  213043 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:38:14.309561  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:38:14.880071  213043 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:38:14.880142  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:14.880212  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-080784 minikube.k8s.io/updated_at=2025_11_22T00_38_14_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=default-k8s-diff-port-080784 minikube.k8s.io/primary=true
	I1122 00:38:14.905591  213043 ops.go:34] apiserver oom_adj: -16
	I1122 00:38:15.334962  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:15.835022  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:16.335530  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:16.836187  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:16.854576  216447 start.go:309] selected driver: docker
	I1122 00:38:16.854594  216447 start.go:930] validating driver "docker" against <nil>
	I1122 00:38:16.854607  216447 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:38:16.855439  216447 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:38:16.966333  216447 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:38:16.957247731 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:38:16.966483  216447 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1122 00:38:16.966712  216447 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:38:16.969912  216447 out.go:179] * Using Docker driver with root privileges
	I1122 00:38:16.972856  216447 cni.go:84] Creating CNI manager for ""
	I1122 00:38:16.972928  216447 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:38:16.972942  216447 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1122 00:38:16.973031  216447 start.go:353] cluster config:
	{Name:embed-certs-540723 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-540723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:38:16.979240  216447 out.go:179] * Starting "embed-certs-540723" primary control-plane node in "embed-certs-540723" cluster
	I1122 00:38:16.982109  216447 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:38:16.985012  216447 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:38:16.987911  216447 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:38:16.987958  216447 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1122 00:38:16.987972  216447 cache.go:65] Caching tarball of preloaded images
	I1122 00:38:16.987983  216447 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:38:16.988067  216447 preload.go:238] Found /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1122 00:38:16.988078  216447 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:38:16.988189  216447 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/config.json ...
	I1122 00:38:16.988207  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/config.json: {Name:mke532fb35dfb339616ed8cd6aa11a6b4f357b9b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:17.010559  216447 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:38:17.010584  216447 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:38:17.010606  216447 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:38:17.010629  216447 start.go:360] acquireMachinesLock for embed-certs-540723: {Name:mk358644e8d9346f7e946c6076afa0430fba0d3b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:38:17.010765  216447 start.go:364] duration metric: took 116.096µs to acquireMachinesLock for "embed-certs-540723"
	I1122 00:38:17.010808  216447 start.go:93] Provisioning new machine with config: &{Name:embed-certs-540723 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-540723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:38:17.010874  216447 start.go:125] createHost starting for "" (driver="docker")
	I1122 00:38:17.335330  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:17.835060  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:18.336050  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:18.835071  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:19.335128  213043 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:19.809086  213043 kubeadm.go:1114] duration metric: took 4.929007747s to wait for elevateKubeSystemPrivileges
	I1122 00:38:19.809120  213043 kubeadm.go:403] duration metric: took 24.28896765s to StartCluster
	I1122 00:38:19.809138  213043 settings.go:142] acquiring lock: {Name:mk5b79634916fd13f05f4c848ff3e8b07cafa39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:19.809216  213043 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:38:19.809938  213043 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:19.811994  213043 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:38:19.812124  213043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:38:19.812467  213043 config.go:182] Loaded profile config "default-k8s-diff-port-080784": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:38:19.812508  213043 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:38:19.812576  213043 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-080784"
	I1122 00:38:19.812590  213043 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-080784"
	I1122 00:38:19.812618  213043 host.go:66] Checking if "default-k8s-diff-port-080784" exists ...
	I1122 00:38:19.813163  213043 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:38:19.813675  213043 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-080784"
	I1122 00:38:19.813702  213043 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-080784"
	I1122 00:38:19.814006  213043 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:38:19.820826  213043 out.go:179] * Verifying Kubernetes components...
	I1122 00:38:19.828353  213043 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:38:19.852142  213043 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:38:17.014410  216447 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1122 00:38:17.014663  216447 start.go:159] libmachine.API.Create for "embed-certs-540723" (driver="docker")
	I1122 00:38:17.014696  216447 client.go:173] LocalClient.Create starting
	I1122 00:38:17.014777  216447 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem
	I1122 00:38:17.014819  216447 main.go:143] libmachine: Decoding PEM data...
	I1122 00:38:17.014841  216447 main.go:143] libmachine: Parsing certificate...
	I1122 00:38:17.016356  216447 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem
	I1122 00:38:17.016410  216447 main.go:143] libmachine: Decoding PEM data...
	I1122 00:38:17.016428  216447 main.go:143] libmachine: Parsing certificate...
	I1122 00:38:17.016859  216447 cli_runner.go:164] Run: docker network inspect embed-certs-540723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1122 00:38:17.033109  216447 cli_runner.go:211] docker network inspect embed-certs-540723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1122 00:38:17.033212  216447 network_create.go:284] running [docker network inspect embed-certs-540723] to gather additional debugging logs...
	I1122 00:38:17.033237  216447 cli_runner.go:164] Run: docker network inspect embed-certs-540723
	W1122 00:38:17.051482  216447 cli_runner.go:211] docker network inspect embed-certs-540723 returned with exit code 1
	I1122 00:38:17.051514  216447 network_create.go:287] error running [docker network inspect embed-certs-540723]: docker network inspect embed-certs-540723: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-540723 not found
	I1122 00:38:17.051529  216447 network_create.go:289] output of [docker network inspect embed-certs-540723]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-540723 not found
	
	** /stderr **
	I1122 00:38:17.051781  216447 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:38:17.069266  216447 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc891483483f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:f5:f5:5e:a2:12} reservation:<nil>}
	I1122 00:38:17.069805  216447 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dcada94e63da IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:8a:bf:ad:c8:04:5e} reservation:<nil>}
	I1122 00:38:17.070332  216447 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7ab25f17f29c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:da:32:b1:2f:5f:ec} reservation:<nil>}
	I1122 00:38:17.070973  216447 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001a280a0}
	I1122 00:38:17.071004  216447 network_create.go:124] attempt to create docker network embed-certs-540723 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1122 00:38:17.071087  216447 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-540723 embed-certs-540723
	I1122 00:38:17.134282  216447 network_create.go:108] docker network embed-certs-540723 192.168.76.0/24 created
	I1122 00:38:17.134322  216447 kic.go:121] calculated static IP "192.168.76.2" for the "embed-certs-540723" container
	I1122 00:38:17.134418  216447 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1122 00:38:17.152204  216447 cli_runner.go:164] Run: docker volume create embed-certs-540723 --label name.minikube.sigs.k8s.io=embed-certs-540723 --label created_by.minikube.sigs.k8s.io=true
	I1122 00:38:17.169709  216447 oci.go:103] Successfully created a docker volume embed-certs-540723
	I1122 00:38:17.169805  216447 cli_runner.go:164] Run: docker run --rm --name embed-certs-540723-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-540723 --entrypoint /usr/bin/test -v embed-certs-540723:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -d /var/lib
	I1122 00:38:17.749906  216447 oci.go:107] Successfully prepared a docker volume embed-certs-540723
	I1122 00:38:17.749991  216447 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:38:17.750008  216447 kic.go:194] Starting extracting preloaded images to volume ...
	I1122 00:38:17.750083  216447 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-540723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir
	I1122 00:38:19.855178  213043 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:38:19.855199  213043 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:38:19.855272  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:38:19.858445  213043 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-080784"
	I1122 00:38:19.858508  213043 host.go:66] Checking if "default-k8s-diff-port-080784" exists ...
	I1122 00:38:19.859014  213043 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:38:19.895213  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	I1122 00:38:19.910584  213043 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:38:19.910608  213043 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:38:19.910692  213043 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:38:19.939858  213043 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	I1122 00:38:20.442140  213043 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:38:20.641945  213043 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:38:20.644595  213043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:38:20.661559  213043 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:38:21.448312  213043 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.006067835s)
	I1122 00:38:21.448338  213043 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
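The sed pipeline completed just above injects that host record by rewriting the coredns ConfigMap's Corefile; reconstructed from the sed expressions shown (indentation approximate), the inserted stanza is:

	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}

together with a log directive added before the existing errors line.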
	I1122 00:38:21.449027  213043 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-080784" to be "Ready" ...
	I1122 00:38:21.978176  213043 kapi.go:214] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-080784" context rescaled to 1 replicas
	I1122 00:38:22.121759  213043 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.460100142s)
	I1122 00:38:22.143330  213043 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1122 00:38:22.504271  216447 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-540723:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e -I lz4 -xf /preloaded.tar -C /extractDir: (4.754126796s)
	I1122 00:38:22.504306  216447 kic.go:203] duration metric: took 4.754294938s to extract preloaded images to volume ...
	W1122 00:38:22.504447  216447 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1122 00:38:22.504568  216447 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1122 00:38:22.566607  216447 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-540723 --name embed-certs-540723 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-540723 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-540723 --network embed-certs-540723 --ip 192.168.76.2 --volume embed-certs-540723:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e
	I1122 00:38:22.929612  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Running}}
	I1122 00:38:22.958271  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:22.979674  216447 cli_runner.go:164] Run: docker exec embed-certs-540723 stat /var/lib/dpkg/alternatives/iptables
	I1122 00:38:23.046491  216447 oci.go:144] the created container "embed-certs-540723" has a running status.
	I1122 00:38:23.046528  216447 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa...
	I1122 00:38:23.443054  216447 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1122 00:38:23.469583  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:23.490215  216447 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1122 00:38:23.490251  216447 kic_runner.go:114] Args: [docker exec --privileged embed-certs-540723 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1122 00:38:23.555316  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:23.573560  216447 machine.go:94] provisionDockerMachine start ...
	I1122 00:38:23.573655  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:23.595112  216447 main.go:143] libmachine: Using SSH client type: native
	I1122 00:38:23.595445  216447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:38:23.595458  216447 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:38:23.596231  216447 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1122 00:38:22.191659  213043 addons.go:530] duration metric: took 2.379139476s for enable addons: enabled=[default-storageclass storage-provisioner]
	W1122 00:38:23.454070  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:25.952294  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
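The node_ready warnings above come from polling the node's Ready condition; an equivalent manual check (a hypothetical one-off against the same kubeconfig context, not a command the harness runs) would be:

	kubectl --context default-k8s-diff-port-080784 get node default-k8s-diff-port-080784 -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

which prints True once the kubelet reports the node Ready.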
	I1122 00:38:26.739087  216447 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-540723
	
	I1122 00:38:26.739113  216447 ubuntu.go:182] provisioning hostname "embed-certs-540723"
	I1122 00:38:26.739190  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:26.758437  216447 main.go:143] libmachine: Using SSH client type: native
	I1122 00:38:26.758749  216447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:38:26.758766  216447 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-540723 && echo "embed-certs-540723" | sudo tee /etc/hostname
	I1122 00:38:26.909160  216447 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-540723
	
	I1122 00:38:26.909280  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:26.927909  216447 main.go:143] libmachine: Using SSH client type: native
	I1122 00:38:26.928223  216447 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1122 00:38:26.928240  216447 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-540723' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-540723/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-540723' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:38:27.067945  216447 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:38:27.067968  216447 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-2332/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-2332/.minikube}
	I1122 00:38:27.068011  216447 ubuntu.go:190] setting up certificates
	I1122 00:38:27.068023  216447 provision.go:84] configureAuth start
	I1122 00:38:27.068096  216447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-540723
	I1122 00:38:27.085333  216447 provision.go:143] copyHostCerts
	I1122 00:38:27.085407  216447 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem, removing ...
	I1122 00:38:27.085422  216447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem
	I1122 00:38:27.085512  216447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem (1675 bytes)
	I1122 00:38:27.085615  216447 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem, removing ...
	I1122 00:38:27.085625  216447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem
	I1122 00:38:27.085655  216447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem (1078 bytes)
	I1122 00:38:27.085725  216447 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem, removing ...
	I1122 00:38:27.085734  216447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem
	I1122 00:38:27.085763  216447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem (1123 bytes)
	I1122 00:38:27.085822  216447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem org=jenkins.embed-certs-540723 san=[127.0.0.1 192.168.76.2 embed-certs-540723 localhost minikube]
	I1122 00:38:27.251405  216447 provision.go:177] copyRemoteCerts
	I1122 00:38:27.251480  216447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:38:27.251519  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:27.270171  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:27.371334  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:38:27.388811  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1122 00:38:27.407366  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:38:27.425162  216447 provision.go:87] duration metric: took 357.113917ms to configureAuth
	I1122 00:38:27.425192  216447 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:38:27.425402  216447 config.go:182] Loaded profile config "embed-certs-540723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:38:27.425414  216447 machine.go:97] duration metric: took 3.851830554s to provisionDockerMachine
	I1122 00:38:27.425421  216447 client.go:176] duration metric: took 10.41071079s to LocalClient.Create
	I1122 00:38:27.425441  216447 start.go:167] duration metric: took 10.410785277s to libmachine.API.Create "embed-certs-540723"
	I1122 00:38:27.425450  216447 start.go:293] postStartSetup for "embed-certs-540723" (driver="docker")
	I1122 00:38:27.425459  216447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:38:27.425508  216447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:38:27.425553  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:27.443017  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:27.548078  216447 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:38:27.551646  216447 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:38:27.551693  216447 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:38:27.551721  216447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/addons for local assets ...
	I1122 00:38:27.551803  216447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/files for local assets ...
	I1122 00:38:27.551930  216447 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem -> 56232.pem in /etc/ssl/certs
	I1122 00:38:27.552082  216447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:38:27.560107  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:38:27.579802  216447 start.go:296] duration metric: took 154.338128ms for postStartSetup
	I1122 00:38:27.580187  216447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-540723
	I1122 00:38:27.597480  216447 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/config.json ...
	I1122 00:38:27.597772  216447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:38:27.597823  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:27.615163  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:27.713345  216447 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:38:27.718217  216447 start.go:128] duration metric: took 10.707327523s to createHost
	I1122 00:38:27.718242  216447 start.go:83] releasing machines lock for "embed-certs-540723", held for 10.707462179s
	I1122 00:38:27.718341  216447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-540723
	I1122 00:38:27.735539  216447 ssh_runner.go:195] Run: cat /version.json
	I1122 00:38:27.735631  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:27.735721  216447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:38:27.735779  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:27.757717  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:27.769777  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:27.950768  216447 ssh_runner.go:195] Run: systemctl --version
	I1122 00:38:27.958039  216447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:38:27.962153  216447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:38:27.962219  216447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:38:27.992451  216447 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/10-crio-bridge.conflist.disabled] bridge cni config(s)
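
The find/mv step above sidelines conflicting bridge/podman CNI configs by appending a .mk_disabled suffix rather than deleting them. If that ever needs undoing by hand, a sketch (same suffix convention assumed):

    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;
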
	I1122 00:38:27.992523  216447 start.go:496] detecting cgroup driver to use...
	I1122 00:38:27.992569  216447 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:38:27.992624  216447 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:38:28.010012  216447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:38:28.024708  216447 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:38:28.024789  216447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:38:28.050340  216447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:38:28.074293  216447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:38:28.208582  216447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:38:28.341936  216447 docker.go:234] disabling docker service ...
	I1122 00:38:28.342028  216447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:38:28.366071  216447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:38:28.380541  216447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:38:28.506715  216447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:38:28.632854  216447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:38:28.645947  216447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:38:28.661609  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:38:28.671259  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:38:28.681790  216447 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1122 00:38:28.681899  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1122 00:38:28.691692  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:38:28.701452  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:38:28.710886  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:38:28.720844  216447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:38:28.729103  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:38:28.737886  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:38:28.746559  216447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:38:28.755737  216447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:38:28.763441  216447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:38:28.770908  216447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:38:28.883777  216447 ssh_runner.go:195] Run: sudo systemctl restart containerd
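
Taken together, the sed edits above point containerd at the cgroupfs driver and minikube's pause image before the restart. A rough sketch of the /etc/containerd/config.toml fragment they target, assuming a CRI v1 plugin layout (containerd 2.x relocates some of these tables, so treat the paths as illustrative only):

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.10.1"
      enable_unprivileged_ports = true
      restrict_oom_score_adj = false
      [plugins."io.containerd.grpc.v1.cri".cni]
        conf_dir = "/etc/cni/net.d"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = false
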
	I1122 00:38:29.011247  216447 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:38:29.011393  216447 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:38:29.015477  216447 start.go:564] Will wait 60s for crictl version
	I1122 00:38:29.015633  216447 ssh_runner.go:195] Run: which crictl
	I1122 00:38:29.019460  216447 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:38:29.050834  216447 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:38:29.050976  216447 ssh_runner.go:195] Run: containerd --version
	I1122 00:38:29.070794  216447 ssh_runner.go:195] Run: containerd --version
	I1122 00:38:29.095369  216447 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1122 00:38:29.098378  216447 cli_runner.go:164] Run: docker network inspect embed-certs-540723 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:38:29.113773  216447 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:38:29.123795  216447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:38:29.135257  216447 kubeadm.go:884] updating cluster {Name:embed-certs-540723 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-540723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:38:29.135374  216447 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:38:29.135454  216447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:38:29.159277  216447 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:38:29.159301  216447 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:38:29.159357  216447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:38:29.183383  216447 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:38:29.183410  216447 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:38:29.183419  216447 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1122 00:38:29.183521  216447 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-540723 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-540723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:38:29.183619  216447 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:38:29.208453  216447 cni.go:84] Creating CNI manager for ""
	I1122 00:38:29.208476  216447 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:38:29.208493  216447 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:38:29.208519  216447 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-540723 NodeName:embed-certs-540723 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:38:29.208700  216447 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "embed-certs-540723"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:38:29.208776  216447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:38:29.217292  216447 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:38:29.217361  216447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:38:29.225202  216447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I1122 00:38:29.238928  216447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:38:29.252105  216447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2231 bytes)
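
The rendered config above is staged as /var/tmp/minikube/kubeadm.yaml.new before being swapped into place. As a sanity check outside of minikube, recent kubeadm releases can validate such a file directly; this is a hypothetical manual step, not something this test runs:

    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
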
	I1122 00:38:29.264981  216447 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:38:29.268473  216447 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:38:29.278421  216447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:38:29.399031  216447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:38:29.416942  216447 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723 for IP: 192.168.76.2
	I1122 00:38:29.416961  216447 certs.go:195] generating shared ca certs ...
	I1122 00:38:29.416976  216447 certs.go:227] acquiring lock for ca certs: {Name:mk348a892ec4309987f6c81ee1acef4884ca62db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:29.417164  216447 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key
	I1122 00:38:29.417241  216447 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key
	I1122 00:38:29.417256  216447 certs.go:257] generating profile certs ...
	I1122 00:38:29.417344  216447 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.key
	I1122 00:38:29.417369  216447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.crt with IP's: []
	I1122 00:38:29.893582  216447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.crt ...
	I1122 00:38:29.893617  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.crt: {Name:mk2416a47b0f5758cd518e373a1a7cfbde1b2b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:29.893816  216447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.key ...
	I1122 00:38:29.893829  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/client.key: {Name:mk5a7bf352867aa5d2d260c12df3c6ab92be563a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:29.893923  216447 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key.4b98241a
	I1122 00:38:29.893939  216447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt.4b98241a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1122 00:38:30.461772  216447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt.4b98241a ...
	I1122 00:38:30.461811  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt.4b98241a: {Name:mk94af0bb370789c91c7967f5aa0aa8ff27f5f3c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:30.462010  216447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key.4b98241a ...
	I1122 00:38:30.462029  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key.4b98241a: {Name:mk10a968fb19cf2147a5cafa1ab9037d5d64e4cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:30.462124  216447 certs.go:382] copying /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt.4b98241a -> /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt
	I1122 00:38:30.462215  216447 certs.go:386] copying /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key.4b98241a -> /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key
	I1122 00:38:30.462277  216447 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.key
	I1122 00:38:30.462292  216447 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.crt with IP's: []
	I1122 00:38:30.897714  216447 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.crt ...
	I1122 00:38:30.897745  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.crt: {Name:mk0afb616fb35d112fca628ec947733ed0afff85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:30.897932  216447 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.key ...
	I1122 00:38:30.897947  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.key: {Name:mk289c592c281514d8f849877dc292a05466ff16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:30.898150  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem (1338 bytes)
	W1122 00:38:30.898198  216447 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623_empty.pem, impossibly tiny 0 bytes
	I1122 00:38:30.898212  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:38:30.898238  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:38:30.898268  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:38:30.898295  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem (1675 bytes)
	I1122 00:38:30.898352  216447 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:38:30.898904  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:38:30.918070  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:38:30.937778  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:38:30.963377  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:38:30.981609  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1122 00:38:31.000386  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:38:31.021496  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:38:31.050013  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/embed-certs-540723/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:38:31.077463  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem --> /usr/share/ca-certificates/5623.pem (1338 bytes)
	I1122 00:38:31.099698  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /usr/share/ca-certificates/56232.pem (1708 bytes)
	I1122 00:38:31.123981  216447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:38:31.149736  216447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:38:31.163437  216447 ssh_runner.go:195] Run: openssl version
	I1122 00:38:31.170222  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5623.pem && ln -fs /usr/share/ca-certificates/5623.pem /etc/ssl/certs/5623.pem"
	I1122 00:38:31.178719  216447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5623.pem
	I1122 00:38:31.182379  216447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/5623.pem
	I1122 00:38:31.182492  216447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5623.pem
	I1122 00:38:31.228180  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5623.pem /etc/ssl/certs/51391683.0"
	I1122 00:38:31.236753  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56232.pem && ln -fs /usr/share/ca-certificates/56232.pem /etc/ssl/certs/56232.pem"
	I1122 00:38:31.245478  216447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56232.pem
	I1122 00:38:31.249331  216447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/56232.pem
	I1122 00:38:31.249450  216447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56232.pem
	I1122 00:38:31.290709  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56232.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:38:31.298862  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:38:31.307372  216447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:38:31.311326  216447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:38:31.311412  216447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:38:31.354058  216447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
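
The 51391683.0, 3ec20f2e.0, and b5213941.0 link names above are OpenSSL subject-hash values, which is why each ln -fs is preceded by an openssl x509 -hash call. The mapping can be reproduced by hand (illustrative):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, matching /etc/ssl/certs/b5213941.0 above
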
	I1122 00:38:31.362855  216447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:38:31.366395  216447 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1122 00:38:31.366456  216447 kubeadm.go:401] StartCluster: {Name:embed-certs-540723 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-540723 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:38:31.366527  216447 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:38:31.366585  216447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:38:31.392804  216447 cri.go:89] found id: ""
	I1122 00:38:31.392940  216447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:38:31.400856  216447 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1122 00:38:31.408855  216447 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1122 00:38:31.408919  216447 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1122 00:38:31.417567  216447 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1122 00:38:31.417588  216447 kubeadm.go:158] found existing configuration files:
	
	I1122 00:38:31.417641  216447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1122 00:38:31.425895  216447 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1122 00:38:31.425975  216447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1122 00:38:31.434228  216447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1122 00:38:31.442150  216447 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1122 00:38:31.442224  216447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1122 00:38:31.450158  216447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1122 00:38:31.458958  216447 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1122 00:38:31.459133  216447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1122 00:38:31.467266  216447 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1122 00:38:31.475682  216447 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1122 00:38:31.475748  216447 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1122 00:38:31.483333  216447 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1122 00:38:31.523373  216447 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1122 00:38:31.523658  216447 kubeadm.go:319] [preflight] Running pre-flight checks
	I1122 00:38:31.549882  216447 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1122 00:38:31.549963  216447 kubeadm.go:319] KERNEL_VERSION: 5.15.0-1084-aws
	I1122 00:38:31.550003  216447 kubeadm.go:319] OS: Linux
	I1122 00:38:31.550055  216447 kubeadm.go:319] CGROUPS_CPU: enabled
	I1122 00:38:31.550110  216447 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1122 00:38:31.550161  216447 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1122 00:38:31.550215  216447 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1122 00:38:31.550267  216447 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1122 00:38:31.550325  216447 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1122 00:38:31.550376  216447 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1122 00:38:31.550428  216447 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1122 00:38:31.550478  216447 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1122 00:38:31.618420  216447 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1122 00:38:31.618572  216447 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1122 00:38:31.618690  216447 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1122 00:38:31.635602  216447 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1122 00:38:31.642187  216447 out.go:252]   - Generating certificates and keys ...
	I1122 00:38:31.642381  216447 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1122 00:38:31.642501  216447 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	W1122 00:38:27.953681  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:30.453000  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:32.325683  216447 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1122 00:38:32.392825  216447 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1122 00:38:32.785449  216447 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1122 00:38:34.358616  216447 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1122 00:38:34.664341  216447 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1122 00:38:34.664793  216447 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-540723 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:38:35.326587  216447 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1122 00:38:35.326923  216447 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-540723 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1122 00:38:35.758667  216447 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1122 00:38:36.306274  216447 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	W1122 00:38:32.953224  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:35.452759  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:37.449914  216447 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1122 00:38:37.450212  216447 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1122 00:38:37.616534  216447 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1122 00:38:38.054605  216447 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1122 00:38:38.514951  216447 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1122 00:38:39.149223  216447 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1122 00:38:39.471045  216447 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1122 00:38:39.484535  216447 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1122 00:38:39.484640  216447 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1122 00:38:39.490664  216447 out.go:252]   - Booting up control plane ...
	I1122 00:38:39.490773  216447 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1122 00:38:39.490850  216447 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1122 00:38:39.490929  216447 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1122 00:38:39.504097  216447 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1122 00:38:39.504426  216447 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1122 00:38:39.512316  216447 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1122 00:38:39.512656  216447 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1122 00:38:39.512882  216447 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1122 00:38:39.651431  216447 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1122 00:38:39.651553  216447 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1122 00:38:40.155848  216447 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 504.258422ms
	I1122 00:38:40.159269  216447 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1122 00:38:40.159367  216447 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1122 00:38:40.159833  216447 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1122 00:38:40.159923  216447 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	W1122 00:38:37.453344  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:39.952701  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:44.827398  216447 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 4.667730898s
	I1122 00:38:46.377701  216447 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 6.218442351s
	W1122 00:38:42.452710  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:44.453047  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:48.161343  216447 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 8.002051247s
	I1122 00:38:48.182719  216447 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1122 00:38:48.199784  216447 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1122 00:38:48.214838  216447 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1122 00:38:48.215060  216447 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-540723 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1122 00:38:48.226808  216447 kubeadm.go:319] [bootstrap-token] Using token: 72kwgl.63h5iuu326tbwoyb
	I1122 00:38:48.229739  216447 out.go:252]   - Configuring RBAC rules ...
	I1122 00:38:48.229875  216447 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1122 00:38:48.235547  216447 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1122 00:38:48.251747  216447 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1122 00:38:48.256076  216447 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1122 00:38:48.260398  216447 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1122 00:38:48.264668  216447 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1122 00:38:48.568928  216447 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1122 00:38:49.012559  216447 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1122 00:38:49.570954  216447 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1122 00:38:49.572226  216447 kubeadm.go:319] 
	I1122 00:38:49.572299  216447 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1122 00:38:49.572305  216447 kubeadm.go:319] 
	I1122 00:38:49.572382  216447 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1122 00:38:49.572387  216447 kubeadm.go:319] 
	I1122 00:38:49.572411  216447 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1122 00:38:49.572470  216447 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1122 00:38:49.572521  216447 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1122 00:38:49.572525  216447 kubeadm.go:319] 
	I1122 00:38:49.572585  216447 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1122 00:38:49.572590  216447 kubeadm.go:319] 
	I1122 00:38:49.572637  216447 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1122 00:38:49.572640  216447 kubeadm.go:319] 
	I1122 00:38:49.572692  216447 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1122 00:38:49.572767  216447 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1122 00:38:49.572835  216447 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1122 00:38:49.572839  216447 kubeadm.go:319] 
	I1122 00:38:49.572924  216447 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1122 00:38:49.573001  216447 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1122 00:38:49.573006  216447 kubeadm.go:319] 
	I1122 00:38:49.573090  216447 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 72kwgl.63h5iuu326tbwoyb \
	I1122 00:38:49.573193  216447 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6ad26553e08ef3801627a7166e0bb20bf24427585c6187a46d63e60c79d4d84c \
	I1122 00:38:49.573214  216447 kubeadm.go:319] 	--control-plane 
	I1122 00:38:49.573218  216447 kubeadm.go:319] 
	I1122 00:38:49.573302  216447 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1122 00:38:49.573306  216447 kubeadm.go:319] 
	I1122 00:38:49.573389  216447 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 72kwgl.63h5iuu326tbwoyb \
	I1122 00:38:49.573491  216447 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:6ad26553e08ef3801627a7166e0bb20bf24427585c6187a46d63e60c79d4d84c 
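
The bootstrap token in the join commands above is created with ttl: 24h0m0s (see the kubeadm config earlier), so it expires. A fresh join command can always be printed on the control plane with standard kubeadm; this is generic kubeadm usage, not a step this test performs:

    kubeadm token create --print-join-command
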
	I1122 00:38:49.578352  216447 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I1122 00:38:49.578585  216447 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I1122 00:38:49.578692  216447 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1122 00:38:49.578712  216447 cni.go:84] Creating CNI manager for ""
	I1122 00:38:49.578731  216447 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:38:49.581781  216447 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1122 00:38:49.584672  216447 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1122 00:38:49.589124  216447 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1122 00:38:49.589145  216447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1122 00:38:49.606557  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1122 00:38:49.954312  216447 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1122 00:38:49.954454  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:49.954553  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-540723 minikube.k8s.io/updated_at=2025_11_22T00_38_49_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785 minikube.k8s.io/name=embed-certs-540723 minikube.k8s.io/primary=true
	I1122 00:38:50.220772  216447 ops.go:34] apiserver oom_adj: -16
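
ops.go reads the legacy /proc/<pid>/oom_adj interface above; the -16 means kube-apiserver is strongly disfavored as an OOM-kill target. On current kernels the same protection is exposed through oom_score_adj, so an equivalent manual check would be (illustrative; the exact number depends on how the process was scored):

    cat /proc/$(pgrep -xn kube-apiserver)/oom_score_adj
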
	I1122 00:38:50.220893  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:50.721253  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:51.220969  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:51.721732  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W1122 00:38:46.952514  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:49.451824  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:51.452290  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:38:52.221112  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:52.721227  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:53.221292  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:53.721901  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:54.221173  216447 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1122 00:38:54.321511  216447 kubeadm.go:1114] duration metric: took 4.36711406s to wait for elevateKubeSystemPrivileges
	I1122 00:38:54.321544  216447 kubeadm.go:403] duration metric: took 22.955091646s to StartCluster
	I1122 00:38:54.321562  216447 settings.go:142] acquiring lock: {Name:mk5b79634916fd13f05f4c848ff3e8b07cafa39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:54.321628  216447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:38:54.322955  216447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:38:54.323217  216447 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:38:54.323307  216447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1122 00:38:54.323608  216447 config.go:182] Loaded profile config "embed-certs-540723": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:38:54.323653  216447 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:38:54.323715  216447 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-540723"
	I1122 00:38:54.323730  216447 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-540723"
	I1122 00:38:54.323751  216447 host.go:66] Checking if "embed-certs-540723" exists ...
	I1122 00:38:54.324237  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:54.324765  216447 addons.go:70] Setting default-storageclass=true in profile "embed-certs-540723"
	I1122 00:38:54.324787  216447 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-540723"
	I1122 00:38:54.325094  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:54.327622  216447 out.go:179] * Verifying Kubernetes components...
	I1122 00:38:54.330984  216447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:38:54.359128  216447 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:38:54.366691  216447 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:38:54.366717  216447 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:38:54.366780  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:54.388907  216447 addons.go:239] Setting addon default-storageclass=true in "embed-certs-540723"
	I1122 00:38:54.389018  216447 host.go:66] Checking if "embed-certs-540723" exists ...
	I1122 00:38:54.389597  216447 cli_runner.go:164] Run: docker container inspect embed-certs-540723 --format={{.State.Status}}
	I1122 00:38:54.421212  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:54.437710  216447 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:38:54.437735  216447 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:38:54.437795  216447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-540723
	I1122 00:38:54.475094  216447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/embed-certs-540723/id_rsa Username:docker}
	I1122 00:38:54.612038  216447 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1122 00:38:54.634516  216447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:38:54.638625  216447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:38:54.752274  216447 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:38:55.298115  216447 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1122 00:38:55.300587  216447 node_ready.go:35] waiting up to 6m0s for node "embed-certs-540723" to be "Ready" ...
	I1122 00:38:55.680642  216447 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.041982843s)
	I1122 00:38:55.693324  216447 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1122 00:38:55.696202  216447 addons.go:530] duration metric: took 1.372543284s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1122 00:38:55.802763  216447 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-540723" context rescaled to 1 replicas
	W1122 00:38:53.951990  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:56.452697  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:38:57.304244  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:38:59.805110  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:38:58.952063  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	W1122 00:39:00.952275  213043 node_ready.go:57] node "default-k8s-diff-port-080784" has "Ready":"False" status (will retry)
	I1122 00:39:01.452150  213043 node_ready.go:49] node "default-k8s-diff-port-080784" is "Ready"
	I1122 00:39:01.452176  213043 node_ready.go:38] duration metric: took 40.003129289s for node "default-k8s-diff-port-080784" to be "Ready" ...
	I1122 00:39:01.452191  213043 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:39:01.452247  213043 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:39:01.465012  213043 api_server.go:72] duration metric: took 41.652970543s to wait for apiserver process to appear ...
	I1122 00:39:01.465037  213043 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:39:01.465057  213043 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8444/healthz ...
	I1122 00:39:01.473970  213043 api_server.go:279] https://192.168.85.2:8444/healthz returned 200:
	ok
	I1122 00:39:01.475088  213043 api_server.go:141] control plane version: v1.34.1
	I1122 00:39:01.475115  213043 api_server.go:131] duration metric: took 10.07016ms to wait for apiserver health ...
	I1122 00:39:01.475127  213043 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:39:01.478122  213043 system_pods.go:59] 8 kube-system pods found
	I1122 00:39:01.478163  213043 system_pods.go:61] "coredns-66bc5c9577-cg98c" [80a5ce0f-6a18-4c4a-a32b-d664baef9ec4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:39:01.478171  213043 system_pods.go:61] "etcd-default-k8s-diff-port-080784" [95f7a4b1-361c-4a66-8b61-b0b495303508] Running
	I1122 00:39:01.478176  213043 system_pods.go:61] "kindnet-cgr2l" [0dd2f6cd-8657-48d2-940c-c4cd2e89d63d] Running
	I1122 00:39:01.478181  213043 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-080784" [35c0a21a-519f-4afc-9f5f-eca23e831e3c] Running
	I1122 00:39:01.478193  213043 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-080784" [3e5789be-304d-4372-88c3-fb0002a2c846] Running
	I1122 00:39:01.478205  213043 system_pods.go:61] "kube-proxy-l9z8d" [5d948362-27cd-47c6-8af3-a61fd3ef1c51] Running
	I1122 00:39:01.478209  213043 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-080784" [1c0016af-f6d1-489d-b615-7bcb32edd019] Running
	I1122 00:39:01.478215  213043 system_pods.go:61] "storage-provisioner" [c27df238-e4f6-41ab-84bf-86a694ffab65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:39:01.478221  213043 system_pods.go:74] duration metric: took 3.088805ms to wait for pod list to return data ...
	I1122 00:39:01.478233  213043 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:39:01.480753  213043 default_sa.go:45] found service account: "default"
	I1122 00:39:01.480777  213043 default_sa.go:55] duration metric: took 2.537208ms for default service account to be created ...
	I1122 00:39:01.480787  213043 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:39:01.484034  213043 system_pods.go:86] 8 kube-system pods found
	I1122 00:39:01.484070  213043 system_pods.go:89] "coredns-66bc5c9577-cg98c" [80a5ce0f-6a18-4c4a-a32b-d664baef9ec4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:39:01.484077  213043 system_pods.go:89] "etcd-default-k8s-diff-port-080784" [95f7a4b1-361c-4a66-8b61-b0b495303508] Running
	I1122 00:39:01.484086  213043 system_pods.go:89] "kindnet-cgr2l" [0dd2f6cd-8657-48d2-940c-c4cd2e89d63d] Running
	I1122 00:39:01.484092  213043 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-080784" [35c0a21a-519f-4afc-9f5f-eca23e831e3c] Running
	I1122 00:39:01.484097  213043 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-080784" [3e5789be-304d-4372-88c3-fb0002a2c846] Running
	I1122 00:39:01.484101  213043 system_pods.go:89] "kube-proxy-l9z8d" [5d948362-27cd-47c6-8af3-a61fd3ef1c51] Running
	I1122 00:39:01.484105  213043 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-080784" [1c0016af-f6d1-489d-b615-7bcb32edd019] Running
	I1122 00:39:01.484132  213043 system_pods.go:89] "storage-provisioner" [c27df238-e4f6-41ab-84bf-86a694ffab65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:39:01.484158  213043 retry.go:31] will retry after 230.813177ms: missing components: kube-dns
	I1122 00:39:01.719021  213043 system_pods.go:86] 8 kube-system pods found
	I1122 00:39:01.719063  213043 system_pods.go:89] "coredns-66bc5c9577-cg98c" [80a5ce0f-6a18-4c4a-a32b-d664baef9ec4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:39:01.719076  213043 system_pods.go:89] "etcd-default-k8s-diff-port-080784" [95f7a4b1-361c-4a66-8b61-b0b495303508] Running
	I1122 00:39:01.719082  213043 system_pods.go:89] "kindnet-cgr2l" [0dd2f6cd-8657-48d2-940c-c4cd2e89d63d] Running
	I1122 00:39:01.719088  213043 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-080784" [35c0a21a-519f-4afc-9f5f-eca23e831e3c] Running
	I1122 00:39:01.719092  213043 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-080784" [3e5789be-304d-4372-88c3-fb0002a2c846] Running
	I1122 00:39:01.719101  213043 system_pods.go:89] "kube-proxy-l9z8d" [5d948362-27cd-47c6-8af3-a61fd3ef1c51] Running
	I1122 00:39:01.719105  213043 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-080784" [1c0016af-f6d1-489d-b615-7bcb32edd019] Running
	I1122 00:39:01.719120  213043 system_pods.go:89] "storage-provisioner" [c27df238-e4f6-41ab-84bf-86a694ffab65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:39:01.719141  213043 retry.go:31] will retry after 327.1869ms: missing components: kube-dns
	W1122 00:39:01.805167  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:04.304251  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:06.304378  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	I1122 00:39:02.051380  213043 system_pods.go:86] 8 kube-system pods found
	I1122 00:39:02.051419  213043 system_pods.go:89] "coredns-66bc5c9577-cg98c" [80a5ce0f-6a18-4c4a-a32b-d664baef9ec4] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:39:02.051427  213043 system_pods.go:89] "etcd-default-k8s-diff-port-080784" [95f7a4b1-361c-4a66-8b61-b0b495303508] Running
	I1122 00:39:02.051434  213043 system_pods.go:89] "kindnet-cgr2l" [0dd2f6cd-8657-48d2-940c-c4cd2e89d63d] Running
	I1122 00:39:02.051440  213043 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-080784" [35c0a21a-519f-4afc-9f5f-eca23e831e3c] Running
	I1122 00:39:02.051445  213043 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-080784" [3e5789be-304d-4372-88c3-fb0002a2c846] Running
	I1122 00:39:02.051449  213043 system_pods.go:89] "kube-proxy-l9z8d" [5d948362-27cd-47c6-8af3-a61fd3ef1c51] Running
	I1122 00:39:02.051453  213043 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-080784" [1c0016af-f6d1-489d-b615-7bcb32edd019] Running
	I1122 00:39:02.051459  213043 system_pods.go:89] "storage-provisioner" [c27df238-e4f6-41ab-84bf-86a694ffab65] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:39:02.051478  213043 retry.go:31] will retry after 373.645962ms: missing components: kube-dns
	I1122 00:39:02.429843  213043 system_pods.go:86] 8 kube-system pods found
	I1122 00:39:02.429883  213043 system_pods.go:89] "coredns-66bc5c9577-cg98c" [80a5ce0f-6a18-4c4a-a32b-d664baef9ec4] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:39:02.429891  213043 system_pods.go:89] "etcd-default-k8s-diff-port-080784" [95f7a4b1-361c-4a66-8b61-b0b495303508] Running
	I1122 00:39:02.429897  213043 system_pods.go:89] "kindnet-cgr2l" [0dd2f6cd-8657-48d2-940c-c4cd2e89d63d] Running
	I1122 00:39:02.429902  213043 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-080784" [35c0a21a-519f-4afc-9f5f-eca23e831e3c] Running
	I1122 00:39:02.429906  213043 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-080784" [3e5789be-304d-4372-88c3-fb0002a2c846] Running
	I1122 00:39:02.429911  213043 system_pods.go:89] "kube-proxy-l9z8d" [5d948362-27cd-47c6-8af3-a61fd3ef1c51] Running
	I1122 00:39:02.429915  213043 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-080784" [1c0016af-f6d1-489d-b615-7bcb32edd019] Running
	I1122 00:39:02.429919  213043 system_pods.go:89] "storage-provisioner" [c27df238-e4f6-41ab-84bf-86a694ffab65] Running
	I1122 00:39:02.429927  213043 system_pods.go:126] duration metric: took 949.133593ms to wait for k8s-apps to be running ...
	I1122 00:39:02.429939  213043 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:39:02.429997  213043 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:39:02.443263  213043 system_svc.go:56] duration metric: took 13.314939ms WaitForService to wait for kubelet
	I1122 00:39:02.443294  213043 kubeadm.go:587] duration metric: took 42.631253498s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:39:02.443312  213043 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:39:02.446431  213043 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:39:02.446465  213043 node_conditions.go:123] node cpu capacity is 2
	I1122 00:39:02.446478  213043 node_conditions.go:105] duration metric: took 3.161093ms to run NodePressure ...
	I1122 00:39:02.446492  213043 start.go:242] waiting for startup goroutines ...
	I1122 00:39:02.446499  213043 start.go:247] waiting for cluster config update ...
	I1122 00:39:02.446510  213043 start.go:256] writing updated cluster config ...
	I1122 00:39:02.446819  213043 ssh_runner.go:195] Run: rm -f paused
	I1122 00:39:02.450755  213043 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:39:02.454937  213043 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cg98c" in "kube-system" namespace to be "Ready" or be gone ...
	W1122 00:39:04.460325  213043 pod_ready.go:104] pod "coredns-66bc5c9577-cg98c" is not "Ready", error: <nil>
	W1122 00:39:06.461048  213043 pod_ready.go:104] pod "coredns-66bc5c9577-cg98c" is not "Ready", error: <nil>
	W1122 00:39:08.304811  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:10.804148  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:08.960506  213043 pod_ready.go:104] pod "coredns-66bc5c9577-cg98c" is not "Ready", error: <nil>
	W1122 00:39:11.460496  213043 pod_ready.go:104] pod "coredns-66bc5c9577-cg98c" is not "Ready", error: <nil>
	I1122 00:39:12.460460  213043 pod_ready.go:94] pod "coredns-66bc5c9577-cg98c" is "Ready"
	I1122 00:39:12.460490  213043 pod_ready.go:86] duration metric: took 10.005524324s for pod "coredns-66bc5c9577-cg98c" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.463266  213043 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.468635  213043 pod_ready.go:94] pod "etcd-default-k8s-diff-port-080784" is "Ready"
	I1122 00:39:12.468662  213043 pod_ready.go:86] duration metric: took 5.367762ms for pod "etcd-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.471150  213043 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.476063  213043 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-080784" is "Ready"
	I1122 00:39:12.476093  213043 pod_ready.go:86] duration metric: took 4.911599ms for pod "kube-apiserver-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.478325  213043 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.658209  213043 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-080784" is "Ready"
	I1122 00:39:12.658235  213043 pod_ready.go:86] duration metric: took 179.881353ms for pod "kube-controller-manager-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:12.858817  213043 pod_ready.go:83] waiting for pod "kube-proxy-l9z8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:13.258062  213043 pod_ready.go:94] pod "kube-proxy-l9z8d" is "Ready"
	I1122 00:39:13.258088  213043 pod_ready.go:86] duration metric: took 399.246444ms for pod "kube-proxy-l9z8d" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:13.458314  213043 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:13.858666  213043 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-080784" is "Ready"
	I1122 00:39:13.858699  213043 pod_ready.go:86] duration metric: took 400.34227ms for pod "kube-scheduler-default-k8s-diff-port-080784" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:39:13.858714  213043 pod_ready.go:40] duration metric: took 11.407928369s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:39:13.931811  213043 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:39:13.936087  213043 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-080784" cluster and "default" namespace by default
	W1122 00:39:12.804205  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:14.804326  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:16.805094  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:19.304673  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
	W1122 00:39:21.304729  216447 node_ready.go:57] node "embed-certs-540723" has "Ready":"False" status (will retry)
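The interleaved node_ready/pod_ready lines above come from minikube's polling helpers (node_ready.go, pod_ready.go): the harness waits up to 6m0s, re-fetching the node and logging a warning each time the Ready condition is still False. A minimal sketch of that polling pattern against client-go, assuming a configured *kubernetes.Clientset (waitNodeReady is an illustrative name, not minikube's actual helper):

package example

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitNodeReady polls until the named node reports Ready=True, mirroring the
// node_ready.go retry loop in the log above.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient API error: keep retrying
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					if c.Status != corev1.ConditionTrue {
						fmt.Printf("node %q has Ready:%q (will retry)\n", name, c.Status)
						return false, nil
					}
					return true, nil
				}
			}
			return false, nil // Ready condition not posted yet
		})
}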
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	2efaa57a3d019       1611cd07b61d5       9 seconds ago        Running             busybox                   0                   594b8e5974257       busybox                                                default
	0d64e81dc0ad8       138784d87c9c5       24 seconds ago       Running             coredns                   0                   07d41afbffbbf       coredns-66bc5c9577-cg98c                               kube-system
	3951649c708fd       ba04bb24b9575       24 seconds ago       Running             storage-provisioner       0                   5476027915aa3       storage-provisioner                                    kube-system
	252561cb6cab2       b1a8c6f707935       About a minute ago   Running             kindnet-cni               0                   4bfb4d82a2f10       kindnet-cgr2l                                          kube-system
	3b6f77ac2c3c3       05baa95f5142d       About a minute ago   Running             kube-proxy                0                   6a4366acebe4b       kube-proxy-l9z8d                                       kube-system
	d9ab3ff2e6b49       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   13b2c09911bdf       kube-scheduler-default-k8s-diff-port-080784            kube-system
	1d53549631ceb       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   ac68fc1346348       kube-apiserver-default-k8s-diff-port-080784            kube-system
	e6948214b5c72       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   6aef774c41c11       kube-controller-manager-default-k8s-diff-port-080784   kube-system
	1f283db038f66       a1894772a478e       About a minute ago   Running             etcd                      0                   27ee9c78fecad       etcd-default-k8s-diff-port-080784                      kube-system
	
	
	==> containerd <==
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.854880143Z" level=info msg="StartContainer for \"3951649c708fd5267d0ad4e41c0bfc6891129e251b06f5aa4d39d9c92c5aefd9\""
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.856388127Z" level=info msg="connecting to shim 3951649c708fd5267d0ad4e41c0bfc6891129e251b06f5aa4d39d9c92c5aefd9" address="unix:///run/containerd/s/f33cf1c7adbdc4e7f35afd586b0ffc559ff0fce9efc228258407bdc4102469b3" protocol=ttrpc version=3
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.857696641Z" level=info msg="CreateContainer within sandbox \"07d41afbffbbf64cbccdda513f865ef5a1d87bf97f3a13da6ae2dedc71063a50\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.875157376Z" level=info msg="Container 0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.889962666Z" level=info msg="CreateContainer within sandbox \"07d41afbffbbf64cbccdda513f865ef5a1d87bf97f3a13da6ae2dedc71063a50\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed\""
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.891168869Z" level=info msg="StartContainer for \"0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed\""
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.893186267Z" level=info msg="connecting to shim 0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed" address="unix:///run/containerd/s/ab8916ef6eecc15b7daf818516f8efae5f73a7b9ec0b75b83192d6e943822f50" protocol=ttrpc version=3
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.982250457Z" level=info msg="StartContainer for \"3951649c708fd5267d0ad4e41c0bfc6891129e251b06f5aa4d39d9c92c5aefd9\" returns successfully"
	Nov 22 00:39:01 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:01.983340056Z" level=info msg="StartContainer for \"0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed\" returns successfully"
	Nov 22 00:39:14 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:14.537580307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2004090a-bf01-4959-8a39-43712a0513ef,Namespace:default,Attempt:0,}"
	Nov 22 00:39:14 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:14.598371289Z" level=info msg="connecting to shim 594b8e597425723af373549357bf05fcab57bf8248f1ce7f9b1771bc1be1c3c0" address="unix:///run/containerd/s/a381171f79942286ac86de984d662cb9f01484f7d8fbf9f432477df719c8408e" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:39:14 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:14.659836898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:2004090a-bf01-4959-8a39-43712a0513ef,Namespace:default,Attempt:0,} returns sandbox id \"594b8e597425723af373549357bf05fcab57bf8248f1ce7f9b1771bc1be1c3c0\""
	Nov 22 00:39:14 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:14.662707887Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.616165605Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.617969289Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937185"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.620240483Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.623371430Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.624235073Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 1.961280527s"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.624281465Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.631002755Z" level=info msg="CreateContainer within sandbox \"594b8e597425723af373549357bf05fcab57bf8248f1ce7f9b1771bc1be1c3c0\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.645784749Z" level=info msg="Container 2efaa57a3d0190cd6de65e98a645999c92d75c38037a80d865e7f0ad1c376f9d: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.658139412Z" level=info msg="CreateContainer within sandbox \"594b8e597425723af373549357bf05fcab57bf8248f1ce7f9b1771bc1be1c3c0\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"2efaa57a3d0190cd6de65e98a645999c92d75c38037a80d865e7f0ad1c376f9d\""
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.659242303Z" level=info msg="StartContainer for \"2efaa57a3d0190cd6de65e98a645999c92d75c38037a80d865e7f0ad1c376f9d\""
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.660483165Z" level=info msg="connecting to shim 2efaa57a3d0190cd6de65e98a645999c92d75c38037a80d865e7f0ad1c376f9d" address="unix:///run/containerd/s/a381171f79942286ac86de984d662cb9f01484f7d8fbf9f432477df719c8408e" protocol=ttrpc version=3
	Nov 22 00:39:16 default-k8s-diff-port-080784 containerd[759]: time="2025-11-22T00:39:16.722622949Z" level=info msg="StartContainer for \"2efaa57a3d0190cd6de65e98a645999c92d75c38037a80d865e7f0ad1c376f9d\" returns successfully"
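The containerd entries above trace the CRI sequence for the busybox pod: RunPodSandbox, then PullImage (the 1.28.4-glibc image resolves in about 1.96s), then CreateContainer and StartContainer within the new sandbox. A minimal sketch of the pull step via the containerd 1.x Go client, assuming the default socket path and the "k8s.io" namespace that the CRI plugin uses; this is not the kubelet's actual code path:

package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the same containerd instance the kubelet talks to.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	img, err := client.Pull(ctx, "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
		containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}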
	
	
	==> coredns [0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39659 - 41661 "HINFO IN 1977687106285590801.1285743489587120048. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.012847856s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-080784
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-080784
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=default-k8s-diff-port-080784
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_38_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:38:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-080784
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:39:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:39:01 +0000   Sat, 22 Nov 2025 00:38:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:39:01 +0000   Sat, 22 Nov 2025 00:38:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:39:01 +0000   Sat, 22 Nov 2025 00:38:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:39:01 +0000   Sat, 22 Nov 2025 00:39:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-080784
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                9cd3571d-d1d2-40b1-b21c-06f427a0bd0e
	  Boot ID:                    4e86741a-5896-4eb6-97ce-70ea8beedc67
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-cg98c                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     67s
	  kube-system                 etcd-default-k8s-diff-port-080784                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         75s
	  kube-system                 kindnet-cgr2l                                           100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      67s
	  kube-system                 kube-apiserver-default-k8s-diff-port-080784             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-080784    200m (10%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-l9z8d                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  kube-system                 kube-scheduler-default-k8s-diff-port-080784             100m (5%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         64s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 65s                kube-proxy       
	  Normal   NodeAllocatableEnforced  83s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 83s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  83s (x8 over 83s)  kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    83s (x8 over 83s)  kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     83s (x7 over 83s)  kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasSufficientPID
	  Normal   Starting                 83s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 73s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 73s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  72s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  72s                kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    72s                kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     72s                kubelet          Node default-k8s-diff-port-080784 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           68s                node-controller  Node default-k8s-diff-port-080784 event: Registered Node default-k8s-diff-port-080784 in Controller
	  Normal   NodeReady                25s                kubelet          Node default-k8s-diff-port-080784 status is now: NodeReady
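The "Allocated resources" figures follow directly from the pod table: CPU requests are 100m + 100m + 100m + 250m + 200m + 100m = 850m against 2000m allocatable, and kubectl's integer math truncates 42.5% to 42%; memory requests are 70Mi + 100Mi + 50Mi = 220Mi against 8022300Ki, shown as 2%. A quick check of that arithmetic:

package main

import "fmt"

func main() {
	// CPU requests from the pod table above, in millicores:
	// coredns, etcd, kindnet, kube-apiserver, kube-controller-manager, kube-scheduler.
	cpu := []int64{100, 100, 100, 250, 200, 100}
	var sum int64
	for _, r := range cpu {
		sum += r
	}
	// kubectl truncates rather than rounds: 850*100/2000 = 42.
	fmt.Printf("cpu: %dm of 2000m = %d%%\n", sum, sum*100/2000)

	// Memory requests: 70Mi + 100Mi + 50Mi against 8022300Ki allocatable.
	memKi := int64((70 + 100 + 50) * 1024)
	fmt.Printf("memory: %dKi of 8022300Ki = %d%%\n", memKi, memKi*100/8022300)
}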
	
	
	==> dmesg <==
	[Nov21 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.017121] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498034] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037542] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808656] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.648915] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 23:58] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=00000000f9ea0775{9P.session} n=0000000035823f74
	[  +0.001177] FS-Cache: O-key=[10] '34323935353131333738'
	[  +0.000819] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000f9ea0775{9P.session} n=00000000dbfd8515
	[  +0.001154] FS-Cache: N-key=[10] '34323935353131333738'
	[Nov22 00:00] hrtimer: interrupt took 9958927 ns
	
	
	==> etcd [1f283db038f6611fb92be8c77623b177cb33d57f8a5645f03b6d191a2594fc2d] <==
	{"level":"warn","ts":"2025-11-22T00:38:08.447722Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.493267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.545299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.554590Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.591276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.631666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.649527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.690780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.720319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.749025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41964","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.776689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.810026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.828167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42014","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.868796Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.887690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.916649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.938555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.966056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:08.980419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.016845Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.042331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.072639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.094192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.122418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:09.289578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42232","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:39:26 up  1:21,  0 user,  load average: 2.54, 3.46, 2.89
	Linux default-k8s-diff-port-080784 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [252561cb6cab27d6a08d413150f7d821814252ec16e8d8b445220ccf8ed920c2] <==
	I1122 00:38:21.022804       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:38:21.023061       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:38:21.023178       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:38:21.023188       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:38:21.023204       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:38:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:38:21.225273       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:38:21.225390       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:38:21.225451       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:38:21.312520       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:38:51.225520       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:38:51.312907       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1122 00:38:51.313236       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:38:51.313491       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1122 00:38:52.726465       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:38:52.726523       1 metrics.go:72] Registering metrics
	I1122 00:38:52.726613       1 controller.go:711] "Syncing nftables rules"
	I1122 00:39:01.225186       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:39:01.225235       1 main.go:301] handling current node
	I1122 00:39:11.232132       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:39:11.232208       1 main.go:301] handling current node
	I1122 00:39:21.226548       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:39:21.226695       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1d53549631ceb733afa5892dc05607424c2b5352e3b607632d6fe7db11205546] <==
	I1122 00:38:11.050303       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1122 00:38:11.053370       1 controller.go:667] quota admission added evaluator for: namespaces
	I1122 00:38:11.121823       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:11.122202       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:38:11.139252       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:38:11.211003       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:11.211780       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:38:11.328609       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:38:11.387845       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:38:11.388048       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:38:12.535754       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:38:12.628899       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:38:12.737682       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:38:12.751242       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:38:12.753242       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:38:12.766508       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:38:13.450047       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:38:13.696935       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:38:13.749148       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:38:13.769987       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:38:18.756129       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:18.763071       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:19.051157       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:38:19.201072       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1122 00:39:23.365523       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8444->192.168.85.1:38330: use of closed network connection
	
	
	==> kube-controller-manager [e6948214b5c72c4b8f9a109a57b816f6a486408644295454dbb384df552ea8d7] <==
	I1122 00:38:18.537276       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1122 00:38:18.543731       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:38:18.544130       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:38:18.544231       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1122 00:38:18.547506       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:38:18.547532       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:38:18.548539       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:38:18.548607       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:38:18.548637       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:38:18.549045       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:38:18.554347       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1122 00:38:18.554512       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:38:18.554548       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1122 00:38:18.554586       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1122 00:38:18.554603       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1122 00:38:18.554608       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1122 00:38:18.554613       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1122 00:38:18.566794       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:38:18.566976       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:38:18.566987       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:38:18.566993       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:38:18.575664       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1122 00:38:18.601090       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-080784" podCIDRs=["10.244.0.0/24"]
	I1122 00:38:18.625955       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:39:03.528619       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3b6f77ac2c3c3d3ce2d9fb2efa01e84808ffcdc9a6c4657767c211ebd5bddbd1] <==
	I1122 00:38:21.135387       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:38:21.235754       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:38:21.337338       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:38:21.337404       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:38:21.337517       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:38:21.380814       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:38:21.380865       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:38:21.387384       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:38:21.387899       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:38:21.387927       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:38:21.392915       1 config.go:200] "Starting service config controller"
	I1122 00:38:21.392931       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:38:21.392947       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:38:21.392951       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:38:21.392962       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:38:21.392966       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:38:21.393793       1 config.go:309] "Starting node config controller"
	I1122 00:38:21.393802       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:38:21.393809       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:38:21.493623       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:38:21.493675       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:38:21.493716       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d9ab3ff2e6b49bf65ed2711f9dfb88ffa0b207339e178767951977bb5979d8bb] <==
	I1122 00:38:11.396699       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:38:11.400967       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:38:11.401175       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:38:11.433657       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:38:11.401193       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:38:11.405157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:38:11.432541       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:38:11.432889       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:38:11.432480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:38:11.451121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:38:11.452051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:38:11.451909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:38:11.452370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:38:11.452463       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:38:11.452711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:38:11.452914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:38:11.452971       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:38:11.453006       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:38:11.453054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:38:11.453109       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:38:11.451831       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:38:11.453154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:38:11.454459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:38:11.457114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1122 00:38:13.134203       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:38:15 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:15.100926    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-080784" podStartSLOduration=1.099001425 podStartE2EDuration="1.099001425s" podCreationTimestamp="2025-11-22 00:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:15.068950829 +0000 UTC m=+1.488545990" watchObservedRunningTime="2025-11-22 00:38:15.099001425 +0000 UTC m=+1.518596619"
	Nov 22 00:38:15 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:15.140076    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-080784" podStartSLOduration=1.140055734 podStartE2EDuration="1.140055734s" podCreationTimestamp="2025-11-22 00:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:15.136648212 +0000 UTC m=+1.556243398" watchObservedRunningTime="2025-11-22 00:38:15.140055734 +0000 UTC m=+1.559650903"
	Nov 22 00:38:15 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:15.143979    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-080784" podStartSLOduration=4.143960821 podStartE2EDuration="4.143960821s" podCreationTimestamp="2025-11-22 00:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:15.104350521 +0000 UTC m=+1.523945673" watchObservedRunningTime="2025-11-22 00:38:15.143960821 +0000 UTC m=+1.563555982"
	Nov 22 00:38:15 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:15.176459    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-080784" podStartSLOduration=1.176439857 podStartE2EDuration="1.176439857s" podCreationTimestamp="2025-11-22 00:38:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:15.157196327 +0000 UTC m=+1.576791512" watchObservedRunningTime="2025-11-22 00:38:15.176439857 +0000 UTC m=+1.596035010"
	Nov 22 00:38:18 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:18.638168    1479 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:38:18 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:18.643831    1479 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319601    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/0dd2f6cd-8657-48d2-940c-c4cd2e89d63d-cni-cfg\") pod \"kindnet-cgr2l\" (UID: \"0dd2f6cd-8657-48d2-940c-c4cd2e89d63d\") " pod="kube-system/kindnet-cgr2l"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319672    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq5pf\" (UniqueName: \"kubernetes.io/projected/0dd2f6cd-8657-48d2-940c-c4cd2e89d63d-kube-api-access-tq5pf\") pod \"kindnet-cgr2l\" (UID: \"0dd2f6cd-8657-48d2-940c-c4cd2e89d63d\") " pod="kube-system/kindnet-cgr2l"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319699    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5d948362-27cd-47c6-8af3-a61fd3ef1c51-xtables-lock\") pod \"kube-proxy-l9z8d\" (UID: \"5d948362-27cd-47c6-8af3-a61fd3ef1c51\") " pod="kube-system/kube-proxy-l9z8d"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319733    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0dd2f6cd-8657-48d2-940c-c4cd2e89d63d-lib-modules\") pod \"kindnet-cgr2l\" (UID: \"0dd2f6cd-8657-48d2-940c-c4cd2e89d63d\") " pod="kube-system/kindnet-cgr2l"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319752    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5d948362-27cd-47c6-8af3-a61fd3ef1c51-kube-proxy\") pod \"kube-proxy-l9z8d\" (UID: \"5d948362-27cd-47c6-8af3-a61fd3ef1c51\") " pod="kube-system/kube-proxy-l9z8d"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319769    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5d948362-27cd-47c6-8af3-a61fd3ef1c51-lib-modules\") pod \"kube-proxy-l9z8d\" (UID: \"5d948362-27cd-47c6-8af3-a61fd3ef1c51\") " pod="kube-system/kube-proxy-l9z8d"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319784    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqzv7\" (UniqueName: \"kubernetes.io/projected/5d948362-27cd-47c6-8af3-a61fd3ef1c51-kube-api-access-cqzv7\") pod \"kube-proxy-l9z8d\" (UID: \"5d948362-27cd-47c6-8af3-a61fd3ef1c51\") " pod="kube-system/kube-proxy-l9z8d"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.319846    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0dd2f6cd-8657-48d2-940c-c4cd2e89d63d-xtables-lock\") pod \"kindnet-cgr2l\" (UID: \"0dd2f6cd-8657-48d2-940c-c4cd2e89d63d\") " pod="kube-system/kindnet-cgr2l"
	Nov 22 00:38:19 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:19.482457    1479 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:38:21 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:21.758012    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l9z8d" podStartSLOduration=2.757993228 podStartE2EDuration="2.757993228s" podCreationTimestamp="2025-11-22 00:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:21.142634346 +0000 UTC m=+7.562229498" watchObservedRunningTime="2025-11-22 00:38:21.757993228 +0000 UTC m=+8.177588381"
	Nov 22 00:38:22 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:38:22.488071    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-cgr2l" podStartSLOduration=3.4880490699999998 podStartE2EDuration="3.48804907s" podCreationTimestamp="2025-11-22 00:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:22.349550108 +0000 UTC m=+8.769145261" watchObservedRunningTime="2025-11-22 00:38:22.48804907 +0000 UTC m=+8.907644223"
	Nov 22 00:39:01 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:01.331307    1479 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:39:01 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:01.433827    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80a5ce0f-6a18-4c4a-a32b-d664baef9ec4-config-volume\") pod \"coredns-66bc5c9577-cg98c\" (UID: \"80a5ce0f-6a18-4c4a-a32b-d664baef9ec4\") " pod="kube-system/coredns-66bc5c9577-cg98c"
	Nov 22 00:39:01 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:01.433882    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vrkkn\" (UniqueName: \"kubernetes.io/projected/80a5ce0f-6a18-4c4a-a32b-d664baef9ec4-kube-api-access-vrkkn\") pod \"coredns-66bc5c9577-cg98c\" (UID: \"80a5ce0f-6a18-4c4a-a32b-d664baef9ec4\") " pod="kube-system/coredns-66bc5c9577-cg98c"
	Nov 22 00:39:01 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:01.433908    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-564wj\" (UniqueName: \"kubernetes.io/projected/c27df238-e4f6-41ab-84bf-86a694ffab65-kube-api-access-564wj\") pod \"storage-provisioner\" (UID: \"c27df238-e4f6-41ab-84bf-86a694ffab65\") " pod="kube-system/storage-provisioner"
	Nov 22 00:39:01 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:01.433933    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c27df238-e4f6-41ab-84bf-86a694ffab65-tmp\") pod \"storage-provisioner\" (UID: \"c27df238-e4f6-41ab-84bf-86a694ffab65\") " pod="kube-system/storage-provisioner"
	Nov 22 00:39:02 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:02.237527    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cg98c" podStartSLOduration=43.237509057 podStartE2EDuration="43.237509057s" podCreationTimestamp="2025-11-22 00:38:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:39:02.237128539 +0000 UTC m=+48.656723708" watchObservedRunningTime="2025-11-22 00:39:02.237509057 +0000 UTC m=+48.657104218"
	Nov 22 00:39:12 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:12.250559    1479 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=50.250541573 podStartE2EDuration="50.250541573s" podCreationTimestamp="2025-11-22 00:38:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:39:02.257014989 +0000 UTC m=+48.676610199" watchObservedRunningTime="2025-11-22 00:39:12.250541573 +0000 UTC m=+58.670136726"
	Nov 22 00:39:14 default-k8s-diff-port-080784 kubelet[1479]: I1122 00:39:14.316553    1479 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rxhm\" (UniqueName: \"kubernetes.io/projected/2004090a-bf01-4959-8a39-43712a0513ef-kube-api-access-5rxhm\") pod \"busybox\" (UID: \"2004090a-bf01-4959-8a39-43712a0513ef\") " pod="default/busybox"
	
	
	==> storage-provisioner [3951649c708fd5267d0ad4e41c0bfc6891129e251b06f5aa4d39d9c92c5aefd9] <==
	I1122 00:39:02.181639       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-080784_279acefc-3577-4644-802b-e6b20a9acf49!
	W1122 00:39:04.095759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:04.100802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:06.104200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:06.108913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:08.112457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:08.119156       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:10.124715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:10.129308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:12.133121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:12.141425       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:14.147838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:14.166043       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:16.168717       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:16.175123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:18.179382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:18.186207       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:20.188856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:20.193409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:22.197556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:22.202506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:24.206408       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:24.212590       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:26.216322       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:26.228260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
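Nothing in the logs above points at the failure itself. The burst of kube-scheduler "Failed to watch ... forbidden" errors is the usual startup-ordering race before the scheduler's RBAC bindings are reconciled; the informer cache sync at 00:38:13 shows it cleared. The storage-provisioner warnings only flag the v1 Endpoints deprecation (discovery.k8s.io/v1 EndpointSlice replaces it) and are harmless here. If scheduler RBAC were genuinely broken, an impersonated access check would keep failing long after startup; a quick diagnostic sketch (not part of the harness):

    kubectl --context default-k8s-diff-port-080784 auth can-i list pods \
      --as=system:kube-scheduler --all-namespaces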
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-080784 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (13.74s)
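The DeployApp failures in this report share one assertion: the busybox container reports a soft open-file limit (RLIMIT_NOFILE) of 1024 where the test expects 1048576. Containers started through containerd's CRI generally inherit the containerd daemon's rlimits unless the runtime spec overrides them, so one plausible remediation is to raise LimitNOFILE on the containerd unit inside the node. A sketch, assuming the node runs containerd under systemd; the drop-in file name is hypothetical:

    # inside the node, e.g. via: minikube -p <profile> ssh
    sudo mkdir -p /etc/systemd/system/containerd.service.d
    printf '[Service]\nLimitNOFILE=1048576\n' | \
      sudo tee /etc/systemd/system/containerd.service.d/10-nofile.conf
    sudo systemctl daemon-reload && sudo systemctl restart containerd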

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (14.64s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-540723 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [17360a56-547d-4ae3-8398-71b0138ab6da] Pending
helpers_test.go:352: "busybox" [17360a56-547d-4ae3-8398-71b0138ab6da] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [17360a56-547d-4ae3-8398-71b0138ab6da] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003939555s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-540723 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
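The same 1024-vs-1048576 mismatch as above. To see where the 1024 comes from, compare the container's view with the limit on the containerd daemon it inherits from; a diagnostic sketch using the profile names from this run:

    # what the test asserts on
    kubectl --context embed-certs-540723 exec busybox -- /bin/sh -c "ulimit -n"

    # the daemon-side limit inside the node
    minikube -p embed-certs-540723 ssh -- \
      'grep "open files" /proc/$(pgrep -x containerd | head -n1)/limits'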
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-540723
helpers_test.go:243: (dbg) docker inspect embed-certs-540723:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac",
	        "Created": "2025-11-22T00:38:22.582161337Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 217302,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:38:22.655984161Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac/hosts",
	        "LogPath": "/var/lib/docker/containers/2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac/2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac-json.log",
	        "Name": "/embed-certs-540723",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-540723:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-540723",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac",
	                "LowerDir": "/var/lib/docker/overlay2/3a46aba3d3fa36d1688f298eef2435a3cea67bbb3c511b7ec99146b0e3c0a3c8-init/diff:/var/lib/docker/overlay2/7cce95e9587a813ce5f3ee5f28c6de3b78ed608010774b6d981aecaad739a571/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a46aba3d3fa36d1688f298eef2435a3cea67bbb3c511b7ec99146b0e3c0a3c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a46aba3d3fa36d1688f298eef2435a3cea67bbb3c511b7ec99146b0e3c0a3c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a46aba3d3fa36d1688f298eef2435a3cea67bbb3c511b7ec99146b0e3c0a3c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-540723",
	                "Source": "/var/lib/docker/volumes/embed-certs-540723/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-540723",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-540723",
	                "name.minikube.sigs.k8s.io": "embed-certs-540723",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5961479ef86f5c6241ccda80619f0ac95b74f2650418a80b341c55988e81a31",
	            "SandboxKey": "/var/run/docker/netns/d5961479ef86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-540723": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:ff:0c:72:e4:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a14dbc80b5256e92aed3d52f6c0493401acc94d166367abd5c8623c0558292e8",
	                    "EndpointID": "35c3956c883dc892a1121dc487c4dff87e2ba899692680d0ad4acc66f5840b52",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-540723",
	                        "2a8e7036c095"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
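Worth noting in the inspect output above: the node container is created with "Ulimits": [], so nothing is overridden at the Docker level and the limit chain starts at the host Docker daemon's defaults. Those defaults can be probed directly; a sketch (the busybox image is only a convenient probe):

    # the limit a fresh container inherits from the daemon
    docker run --rm busybox sh -c 'ulimit -n'

    # an explicit per-container override, for comparison
    docker run --rm --ulimit nofile=1048576:1048576 busybox sh -c 'ulimit -n'

A daemon-wide default can also be set with "default-ulimits" in /etc/docker/daemon.json, followed by a daemon restart.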
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-540723 -n embed-certs-540723
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-540723 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-540723 logs -n 25: (1.634943707s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-285797 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ force-systemd-env-115975 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-115975     │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p force-systemd-env-115975                                                                                                                                                                                                                         │ force-systemd-env-115975     │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p cert-options-089440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ cert-options-089440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ -p cert-options-089440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ delete  │ -p cert-options-089440                                                                                                                                                                                                                              │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ start   │ -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-187160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ stop    │ -p old-k8s-version-187160 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-187160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ start   │ -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:37 UTC │
	│ image   │ old-k8s-version-187160 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ pause   │ -p old-k8s-version-187160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ unpause │ -p old-k8s-version-187160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ delete  │ -p old-k8s-version-187160                                                                                                                                                                                                                           │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ delete  │ -p old-k8s-version-187160                                                                                                                                                                                                                           │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ start   │ -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:39 UTC │
	│ start   │ -p cert-expiration-285797 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │ 22 Nov 25 00:38 UTC │
	│ delete  │ -p cert-expiration-285797                                                                                                                                                                                                                           │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │ 22 Nov 25 00:38 UTC │
	│ start   │ -p embed-certs-540723 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │ 22 Nov 25 00:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-080784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ stop    │ -p default-k8s-diff-port-080784 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-080784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ start   │ -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:39:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:39:41.369390  221580 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:39:41.369501  221580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:39:41.369512  221580 out.go:374] Setting ErrFile to fd 2...
	I1122 00:39:41.369517  221580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:39:41.369776  221580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:39:41.370139  221580 out.go:368] Setting JSON to false
	I1122 00:39:41.371056  221580 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4919,"bootTime":1763767063,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1122 00:39:41.371144  221580 start.go:143] virtualization:  
	I1122 00:39:41.372976  221580 out.go:179] * [default-k8s-diff-port-080784] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:39:41.374116  221580 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:39:41.374254  221580 notify.go:221] Checking for updates...
	I1122 00:39:41.377451  221580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:39:41.379340  221580 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:39:41.380446  221580 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1122 00:39:41.381542  221580 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:39:41.382885  221580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:39:41.384648  221580 config.go:182] Loaded profile config "default-k8s-diff-port-080784": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:39:41.385243  221580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:39:41.417144  221580 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:39:41.417258  221580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:39:41.480047  221580 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:39:41.470230442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:39:41.480161  221580 docker.go:319] overlay module found
	I1122 00:39:41.481587  221580 out.go:179] * Using the docker driver based on existing profile
	I1122 00:39:41.482737  221580 start.go:309] selected driver: docker
	I1122 00:39:41.482751  221580 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-080784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-080784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:39:41.482867  221580 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:39:41.483806  221580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:39:41.539880  221580 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:39:41.530966471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:39:41.540215  221580 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:39:41.540249  221580 cni.go:84] Creating CNI manager for ""
	I1122 00:39:41.540307  221580 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:39:41.540401  221580 start.go:353] cluster config:
	{Name:default-k8s-diff-port-080784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-080784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:39:41.541946  221580 out.go:179] * Starting "default-k8s-diff-port-080784" primary control-plane node in "default-k8s-diff-port-080784" cluster
	I1122 00:39:41.543130  221580 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:39:41.544293  221580 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:39:41.545318  221580 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:39:41.545364  221580 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1122 00:39:41.545376  221580 cache.go:65] Caching tarball of preloaded images
	I1122 00:39:41.545402  221580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:39:41.545450  221580 preload.go:238] Found /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1122 00:39:41.545461  221580 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:39:41.545579  221580 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/config.json ...
	I1122 00:39:41.564830  221580 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:39:41.564857  221580 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:39:41.564877  221580 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:39:41.564901  221580 start.go:360] acquireMachinesLock for default-k8s-diff-port-080784: {Name:mkf1922f37d9de5f76466cb066f0a541ae9dceb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:39:41.564973  221580 start.go:364] duration metric: took 47.656µs to acquireMachinesLock for "default-k8s-diff-port-080784"
	I1122 00:39:41.564996  221580 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:39:41.565007  221580 fix.go:54] fixHost starting: 
	I1122 00:39:41.565283  221580 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:39:41.582199  221580 fix.go:112] recreateIfNeeded on default-k8s-diff-port-080784: state=Stopped err=<nil>
	W1122 00:39:41.582228  221580 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:39:41.583677  221580 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-080784" ...
	I1122 00:39:41.583770  221580 cli_runner.go:164] Run: docker start default-k8s-diff-port-080784
	I1122 00:39:41.854244  221580 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:39:41.875433  221580 kic.go:430] container "default-k8s-diff-port-080784" state is running.
	I1122 00:39:41.878528  221580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-080784
	I1122 00:39:41.905178  221580 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/config.json ...
	I1122 00:39:41.905412  221580 machine.go:94] provisionDockerMachine start ...
	I1122 00:39:41.905480  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:41.925565  221580 main.go:143] libmachine: Using SSH client type: native
	I1122 00:39:41.925946  221580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:39:41.925959  221580 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:39:41.926965  221580 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44478->127.0.0.1:33073: read: connection reset by peer
	I1122 00:39:45.132864  221580 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-080784
	
	I1122 00:39:45.132897  221580 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-080784"
	I1122 00:39:45.133002  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:45.155213  221580 main.go:143] libmachine: Using SSH client type: native
	I1122 00:39:45.155550  221580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:39:45.155591  221580 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-080784 && echo "default-k8s-diff-port-080784" | sudo tee /etc/hostname
	I1122 00:39:45.329836  221580 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-080784
	
	I1122 00:39:45.330113  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:45.351056  221580 main.go:143] libmachine: Using SSH client type: native
	I1122 00:39:45.351421  221580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:39:45.351445  221580 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-080784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-080784/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-080784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:39:45.497595  221580 main.go:143] libmachine: SSH cmd err, output: <nil>: 
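The inline script above is how minikube keeps /etc/hosts consistent with the hostname it just set: if no line already names default-k8s-diff-port-080784, it rewrites an existing 127.0.1.1 entry in place, otherwise it appends a fresh one; the empty output confirms the entry was already present or updated silently.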
	I1122 00:39:45.497688  221580 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-2332/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-2332/.minikube}
	I1122 00:39:45.497739  221580 ubuntu.go:190] setting up certificates
	I1122 00:39:45.497765  221580 provision.go:84] configureAuth start
	I1122 00:39:45.497850  221580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-080784
	I1122 00:39:45.516474  221580 provision.go:143] copyHostCerts
	I1122 00:39:45.516544  221580 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem, removing ...
	I1122 00:39:45.516561  221580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem
	I1122 00:39:45.516637  221580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem (1078 bytes)
	I1122 00:39:45.516741  221580 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem, removing ...
	I1122 00:39:45.516759  221580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem
	I1122 00:39:45.516787  221580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem (1123 bytes)
	I1122 00:39:45.516844  221580 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem, removing ...
	I1122 00:39:45.516853  221580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem
	I1122 00:39:45.516883  221580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem (1675 bytes)
	I1122 00:39:45.516934  221580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-080784 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-080784 localhost minikube]
	I1122 00:39:46.115480  221580 provision.go:177] copyRemoteCerts
	I1122 00:39:46.115547  221580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:39:46.115603  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:46.133323  221580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	I1122 00:39:46.235187  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1122 00:39:46.252569  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:39:46.271475  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:39:46.289707  221580 provision.go:87] duration metric: took 791.907589ms to configureAuth
	I1122 00:39:46.289737  221580 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:39:46.289938  221580 config.go:182] Loaded profile config "default-k8s-diff-port-080784": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:39:46.289950  221580 machine.go:97] duration metric: took 4.384521801s to provisionDockerMachine
	I1122 00:39:46.289959  221580 start.go:293] postStartSetup for "default-k8s-diff-port-080784" (driver="docker")
	I1122 00:39:46.289973  221580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:39:46.290029  221580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:39:46.290075  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:46.307439  221580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	417642f210950       1611cd07b61d5       6 seconds ago        Running             busybox                   0                   d0df139c45090       busybox                                      default
	ba940c4a9dbef       ba04bb24b9575       12 seconds ago       Running             storage-provisioner       0                   95b575225df8c       storage-provisioner                          kube-system
	5a7b746b45e8d       138784d87c9c5       12 seconds ago       Running             coredns                   0                   5451ff5098f78       coredns-66bc5c9577-kbk5c                     kube-system
	e96eec53caa12       b1a8c6f707935       53 seconds ago       Running             kindnet-cni               0                   d7abdafee62af       kindnet-bls8b                                kube-system
	b17cc18fbe3e8       05baa95f5142d       53 seconds ago       Running             kube-proxy                0                   5c0789192c67b       kube-proxy-vgr8w                             kube-system
	09d0d29b2b446       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   4f1cd6769a291       kube-controller-manager-embed-certs-540723   kube-system
	d4ed557ae39e2       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   07cbe223c77f2       kube-apiserver-embed-certs-540723            kube-system
	8e2888c32825b       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   482168df46067       kube-scheduler-embed-certs-540723            kube-system
	5b278fa6de142       a1894772a478e       About a minute ago   Running             etcd                      0                   c90bb953cbcad       etcd-embed-certs-540723                      kube-system
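A table like the one above is CRI-level state as crictl reports it; with the profile still up, it can be reproduced from the host through the node's SSH session (a sketch; assumes crictl inside the node is pointed at containerd's socket, as in the default kicbase image):

  # List all CRI containers on the node, including any exited ones.
  $ minikube -p embed-certs-540723 ssh -- sudo crictl ps -a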
	
	
	==> containerd <==
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.206817062Z" level=info msg="Container ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.210586557Z" level=info msg="CreateContainer within sandbox \"5451ff5098f7819cc5e18faf2c2a9ed5c4573a10ad9ddffa889412647bd68ff3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a7b746b45e8d888b198cdf85f23f45e6c62878404d2f2efe38caef5d954731f\""
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.211454992Z" level=info msg="StartContainer for \"5a7b746b45e8d888b198cdf85f23f45e6c62878404d2f2efe38caef5d954731f\""
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.212784839Z" level=info msg="connecting to shim 5a7b746b45e8d888b198cdf85f23f45e6c62878404d2f2efe38caef5d954731f" address="unix:///run/containerd/s/16ea7e1cb9fee3ccba69bad3d6662daa7b8c9e334a0d2718bd814f715c339c13" protocol=ttrpc version=3
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.221172523Z" level=info msg="CreateContainer within sandbox \"95b575225df8c54a343f4540072d2aef76f3467187a1d32156243e1d6acac8af\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2\""
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.223886594Z" level=info msg="StartContainer for \"ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2\""
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.225704391Z" level=info msg="connecting to shim ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2" address="unix:///run/containerd/s/5a07a40a8ec11c485ebf638844b5e350d89102ad034c4ccf68e1e904b846b7c3" protocol=ttrpc version=3
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.300402358Z" level=info msg="StartContainer for \"ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2\" returns successfully"
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.309841936Z" level=info msg="StartContainer for \"5a7b746b45e8d888b198cdf85f23f45e6c62878404d2f2efe38caef5d954731f\" returns successfully"
	Nov 22 00:39:39 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:39.619718047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17360a56-547d-4ae3-8398-71b0138ab6da,Namespace:default,Attempt:0,}"
	Nov 22 00:39:39 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:39.658059434Z" level=info msg="connecting to shim d0df139c45090491664b7043e2723b014525cb7f88488dcb0f54203446309bad" address="unix:///run/containerd/s/c1209c15a2dce81dcbe0e3c34109f8b81ba3dd2b120fbca121035110dcedd9ed" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:39:39 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:39.711711945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17360a56-547d-4ae3-8398-71b0138ab6da,Namespace:default,Attempt:0,} returns sandbox id \"d0df139c45090491664b7043e2723b014525cb7f88488dcb0f54203446309bad\""
	Nov 22 00:39:39 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:39.715754354Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.954035377Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.955550926Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937187"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.957250798Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.960549673Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.961366964Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.245565094s"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.962364606Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.972284068Z" level=info msg="CreateContainer within sandbox \"d0df139c45090491664b7043e2723b014525cb7f88488dcb0f54203446309bad\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.989535445Z" level=info msg="Container 417642f210950d246f2a17a61fbc30d2eb045f214fc30d2dbf4906a0d896ef79: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.997242627Z" level=info msg="CreateContainer within sandbox \"d0df139c45090491664b7043e2723b014525cb7f88488dcb0f54203446309bad\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"417642f210950d246f2a17a61fbc30d2eb045f214fc30d2dbf4906a0d896ef79\""
	Nov 22 00:39:42 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.999086451Z" level=info msg="StartContainer for \"417642f210950d246f2a17a61fbc30d2eb045f214fc30d2dbf4906a0d896ef79\""
	Nov 22 00:39:42 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:42.000528554Z" level=info msg="connecting to shim 417642f210950d246f2a17a61fbc30d2eb045f214fc30d2dbf4906a0d896ef79" address="unix:///run/containerd/s/c1209c15a2dce81dcbe0e3c34109f8b81ba3dd2b120fbca121035110dcedd9ed" protocol=ttrpc version=3
	Nov 22 00:39:42 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:42.122723165Z" level=info msg="StartContainer for \"417642f210950d246f2a17a61fbc30d2eb045f214fc30d2dbf4906a0d896ef79\" returns successfully"
	
	
	==> coredns [5a7b746b45e8d888b198cdf85f23f45e6c62878404d2f2efe38caef5d954731f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45472 - 15231 "HINFO IN 3645877841633436492.8105226943450054855. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013994545s
	
	
	==> describe nodes <==
	Name:               embed-certs-540723
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-540723
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=embed-certs-540723
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_38_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:38:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-540723
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:39:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:39:35 +0000   Sat, 22 Nov 2025 00:38:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:39:35 +0000   Sat, 22 Nov 2025 00:38:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:39:35 +0000   Sat, 22 Nov 2025 00:38:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:39:35 +0000   Sat, 22 Nov 2025 00:39:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-540723
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                0ba07f99-f6b3-4765-89ca-c97702e7d0a8
	  Boot ID:                    4e86741a-5896-4eb6-97ce-70ea8beedc67
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-kbk5c                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     55s
	  kube-system                 etcd-embed-certs-540723                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         60s
	  kube-system                 kindnet-bls8b                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      55s
	  kube-system                 kube-apiserver-embed-certs-540723             250m (12%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-controller-manager-embed-certs-540723    200m (10%)    0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-proxy-vgr8w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kube-system                 kube-scheduler-embed-certs-540723             100m (5%)     0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 53s   kube-proxy       
	  Normal   Starting                 61s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 61s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  60s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  60s   kubelet          Node embed-certs-540723 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    60s   kubelet          Node embed-certs-540723 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     60s   kubelet          Node embed-certs-540723 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           57s   node-controller  Node embed-certs-540723 event: Registered Node embed-certs-540723 in Controller
	  Normal   NodeReady                14s   kubelet          Node embed-certs-540723 status is now: NodeReady
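The node description above is plain kubectl output and can be regenerated while the cluster is running (a sketch; assumes minikube's usual convention that the kubeconfig context carries the profile name):

  # Re-run the node description for the embed-certs cluster.
  $ kubectl --context embed-certs-540723 describe node embed-certs-540723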
	
	
	==> dmesg <==
	[Nov21 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.017121] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498034] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037542] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808656] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.648915] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 23:58] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=00000000f9ea0775{9P.session} n=0000000035823f74
	[  +0.001177] FS-Cache: O-key=[10] '34323935353131333738'
	[  +0.000819] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000f9ea0775{9P.session} n=00000000dbfd8515
	[  +0.001154] FS-Cache: N-key=[10] '34323935353131333738'
	[Nov22 00:00] hrtimer: interrupt took 9958927 ns
	
	
	==> etcd [5b278fa6de1425390a0f2ab37ad3056a4ef9b8c25d34d2469e67e8d09035920b] <==
	{"level":"warn","ts":"2025-11-22T00:38:44.127513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.170279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.197824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.241956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.262434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.296075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.322557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.357696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.391895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.412723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.465457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.489436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.521165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.559742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.577985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.611394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.647297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.675765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.707278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.749273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.796708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.825782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.838903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.861372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.970472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52632","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:39:49 up  1:22,  0 user,  load average: 2.77, 3.43, 2.89
	Linux embed-certs-540723 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e96eec53caa12f2789abf288897d09a32050e76fa006aacd677ab995420f0510] <==
	I1122 00:38:55.426340       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:38:55.426639       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:38:55.426761       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:38:55.426780       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:38:55.426804       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:38:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:38:55.625448       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:38:55.625475       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:38:55.625484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:38:55.625612       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:39:25.625983       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:39:25.626086       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:39:25.626164       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:39:25.626240       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1122 00:39:27.125606       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:39:27.125862       1 metrics.go:72] Registering metrics
	I1122 00:39:27.126034       1 controller.go:711] "Syncing nftables rules"
	I1122 00:39:35.624855       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:39:35.624913       1 main.go:301] handling current node
	I1122 00:39:45.623806       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:39:45.623923       1 main.go:301] handling current node
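The "Failed to watch ... i/o timeout" lines above show kindnet briefly unable to reach the apiserver's ClusterIP (10.96.0.1:443) before its caches synced at 00:39:27. Reachability of that Service can be probed from inside the node; even an unauthenticated 401/403 reply would prove the TCP path works, in contrast to the timeout logged here (a sketch; assumes curl is present in the node image):

  # Probe the in-cluster apiserver Service endpoint kindnet timed out on.
  $ minikube -p embed-certs-540723 ssh -- curl -sk --max-time 5 https://10.96.0.1/healthz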
	
	
	==> kube-apiserver [d4ed557ae39e2cd10f702f86bc509e03169955fb37eaef6049b2e395f4d794cf] <==
	I1122 00:38:46.242291       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:38:46.253247       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:38:46.255623       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1122 00:38:46.276027       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:46.276453       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:38:46.300545       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:46.301687       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:38:46.815770       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:38:46.822917       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:38:46.822948       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:38:47.636164       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:38:47.697858       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:38:47.818209       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:38:47.826106       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1122 00:38:47.827379       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:38:47.832958       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:38:47.906935       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:38:48.979089       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:38:49.011513       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:38:49.024120       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:38:53.740834       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:53.759037       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:53.905787       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:38:54.008989       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1122 00:39:47.510862       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:37434: use of closed network connection
	
	
	==> kube-controller-manager [09d0d29b2b4463ef0668b6e7a3bbcefa2bc1324d092277467c2c17aa98c89659] <==
	I1122 00:38:52.934439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:38:52.941831       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:38:52.947825       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:38:52.948117       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:38:52.948181       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:38:52.948189       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:38:52.950364       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:38:52.952994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:38:52.953515       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:38:52.953531       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:38:52.954402       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:38:52.954533       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-540723"
	I1122 00:38:52.953580       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:38:52.953645       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:38:52.953654       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:38:52.953675       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:38:52.953684       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:38:52.954609       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:38:52.953632       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:38:52.953612       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:38:52.958914       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:38:52.967829       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:38:52.974190       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:38:52.976532       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:39:38.001337       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b17cc18fbe3e8015a538304c36e2e61a083b18d39873526539a1044c5af14384] <==
	I1122 00:38:55.096260       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:38:55.216607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:38:55.324400       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:38:55.324439       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:38:55.324508       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:38:55.414647       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:38:55.414798       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:38:55.432300       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:38:55.436082       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:38:55.436118       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:38:55.437577       1 config.go:200] "Starting service config controller"
	I1122 00:38:55.437603       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:38:55.437619       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:38:55.437624       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:38:55.437635       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:38:55.437643       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:38:55.438797       1 config.go:309] "Starting node config controller"
	I1122 00:38:55.438812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:38:55.438820       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:38:55.538423       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:38:55.538465       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:38:55.538520       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8e2888c32825b3194c1ac5f65176e35c9019823beec79329c9c82e04e463c2c9] <==
	I1122 00:38:46.368490       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:38:46.376898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:38:46.378055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:38:46.384023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:38:46.384804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:38:46.387221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:38:46.387337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:38:46.387685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:38:46.387834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:38:46.387896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:38:46.387964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:38:46.388021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:38:46.388111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:38:46.388394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:38:46.388596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:38:46.388990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:38:46.389284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:38:46.389575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:38:46.389791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:38:46.390031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:38:47.212709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:38:47.225057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:38:47.267881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:38:47.310781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1122 00:38:47.867909       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:38:49 embed-certs-540723 kubelet[1473]: I1122 00:38:49.981160    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-540723" podStartSLOduration=0.981152844 podStartE2EDuration="981.152844ms" podCreationTimestamp="2025-11-22 00:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:49.940521707 +0000 UTC m=+1.165393005" watchObservedRunningTime="2025-11-22 00:38:49.981152844 +0000 UTC m=+1.206024108"
	Nov 22 00:38:50 embed-certs-540723 kubelet[1473]: I1122 00:38:50.013659    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-540723" podStartSLOduration=1.013636053 podStartE2EDuration="1.013636053s" podCreationTimestamp="2025-11-22 00:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:49.994899928 +0000 UTC m=+1.219771193" watchObservedRunningTime="2025-11-22 00:38:50.013636053 +0000 UTC m=+1.238507326"
	Nov 22 00:38:50 embed-certs-540723 kubelet[1473]: I1122 00:38:50.063372    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-540723" podStartSLOduration=1.063352188 podStartE2EDuration="1.063352188s" podCreationTimestamp="2025-11-22 00:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:50.014024087 +0000 UTC m=+1.238895368" watchObservedRunningTime="2025-11-22 00:38:50.063352188 +0000 UTC m=+1.288223461"
	Nov 22 00:38:52 embed-certs-540723 kubelet[1473]: I1122 00:38:52.934556    1473 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:38:52 embed-certs-540723 kubelet[1473]: I1122 00:38:52.935683    1473 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235796    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87297594-e2ec-4d97-af64-37ac318d3bba-lib-modules\") pod \"kindnet-bls8b\" (UID: \"87297594-e2ec-4d97-af64-37ac318d3bba\") " pod="kube-system/kindnet-bls8b"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235850    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksplb\" (UniqueName: \"kubernetes.io/projected/87297594-e2ec-4d97-af64-37ac318d3bba-kube-api-access-ksplb\") pod \"kindnet-bls8b\" (UID: \"87297594-e2ec-4d97-af64-37ac318d3bba\") " pod="kube-system/kindnet-bls8b"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235874    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fae6b664-123d-4c6b-87fe-a48172bb5ec2-kube-proxy\") pod \"kube-proxy-vgr8w\" (UID: \"fae6b664-123d-4c6b-87fe-a48172bb5ec2\") " pod="kube-system/kube-proxy-vgr8w"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235897    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fae6b664-123d-4c6b-87fe-a48172bb5ec2-xtables-lock\") pod \"kube-proxy-vgr8w\" (UID: \"fae6b664-123d-4c6b-87fe-a48172bb5ec2\") " pod="kube-system/kube-proxy-vgr8w"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235916    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fae6b664-123d-4c6b-87fe-a48172bb5ec2-lib-modules\") pod \"kube-proxy-vgr8w\" (UID: \"fae6b664-123d-4c6b-87fe-a48172bb5ec2\") " pod="kube-system/kube-proxy-vgr8w"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235938    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87297594-e2ec-4d97-af64-37ac318d3bba-xtables-lock\") pod \"kindnet-bls8b\" (UID: \"87297594-e2ec-4d97-af64-37ac318d3bba\") " pod="kube-system/kindnet-bls8b"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235953    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xcr\" (UniqueName: \"kubernetes.io/projected/fae6b664-123d-4c6b-87fe-a48172bb5ec2-kube-api-access-44xcr\") pod \"kube-proxy-vgr8w\" (UID: \"fae6b664-123d-4c6b-87fe-a48172bb5ec2\") " pod="kube-system/kube-proxy-vgr8w"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235974    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/87297594-e2ec-4d97-af64-37ac318d3bba-cni-cfg\") pod \"kindnet-bls8b\" (UID: \"87297594-e2ec-4d97-af64-37ac318d3bba\") " pod="kube-system/kindnet-bls8b"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.361323    1473 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:38:56 embed-certs-540723 kubelet[1473]: I1122 00:38:56.154989    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bls8b" podStartSLOduration=2.154968311 podStartE2EDuration="2.154968311s" podCreationTimestamp="2025-11-22 00:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:56.139915158 +0000 UTC m=+7.364786521" watchObservedRunningTime="2025-11-22 00:38:56.154968311 +0000 UTC m=+7.379839584"
	Nov 22 00:38:57 embed-certs-540723 kubelet[1473]: I1122 00:38:57.578497    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vgr8w" podStartSLOduration=3.57847757 podStartE2EDuration="3.57847757s" podCreationTimestamp="2025-11-22 00:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:56.157012696 +0000 UTC m=+7.381883985" watchObservedRunningTime="2025-11-22 00:38:57.57847757 +0000 UTC m=+8.803348835"
	Nov 22 00:39:35 embed-certs-540723 kubelet[1473]: I1122 00:39:35.714405    1473 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:39:35 embed-certs-540723 kubelet[1473]: I1122 00:39:35.775021    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a437499-56df-4fce-a457-cf615ef0abb8-config-volume\") pod \"coredns-66bc5c9577-kbk5c\" (UID: \"7a437499-56df-4fce-a457-cf615ef0abb8\") " pod="kube-system/coredns-66bc5c9577-kbk5c"
	Nov 22 00:39:35 embed-certs-540723 kubelet[1473]: I1122 00:39:35.775071    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/770a06bc-52a0-4be2-88ac-a35e62e96a5b-tmp\") pod \"storage-provisioner\" (UID: \"770a06bc-52a0-4be2-88ac-a35e62e96a5b\") " pod="kube-system/storage-provisioner"
	Nov 22 00:39:35 embed-certs-540723 kubelet[1473]: I1122 00:39:35.775100    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdk7d\" (UniqueName: \"kubernetes.io/projected/7a437499-56df-4fce-a457-cf615ef0abb8-kube-api-access-qdk7d\") pod \"coredns-66bc5c9577-kbk5c\" (UID: \"7a437499-56df-4fce-a457-cf615ef0abb8\") " pod="kube-system/coredns-66bc5c9577-kbk5c"
	Nov 22 00:39:35 embed-certs-540723 kubelet[1473]: I1122 00:39:35.775122    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6hf\" (UniqueName: \"kubernetes.io/projected/770a06bc-52a0-4be2-88ac-a35e62e96a5b-kube-api-access-tc6hf\") pod \"storage-provisioner\" (UID: \"770a06bc-52a0-4be2-88ac-a35e62e96a5b\") " pod="kube-system/storage-provisioner"
	Nov 22 00:39:37 embed-certs-540723 kubelet[1473]: I1122 00:39:37.263920    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kbk5c" podStartSLOduration=43.263891139 podStartE2EDuration="43.263891139s" podCreationTimestamp="2025-11-22 00:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:39:37.239250985 +0000 UTC m=+48.464122299" watchObservedRunningTime="2025-11-22 00:39:37.263891139 +0000 UTC m=+48.488762412"
	Nov 22 00:39:37 embed-certs-540723 kubelet[1473]: I1122 00:39:37.278256    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.278236113 podStartE2EDuration="42.278236113s" podCreationTimestamp="2025-11-22 00:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:39:37.265696472 +0000 UTC m=+48.490567754" watchObservedRunningTime="2025-11-22 00:39:37.278236113 +0000 UTC m=+48.503107378"
	Nov 22 00:39:39 embed-certs-540723 kubelet[1473]: I1122 00:39:39.406484    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdblb\" (UniqueName: \"kubernetes.io/projected/17360a56-547d-4ae3-8398-71b0138ab6da-kube-api-access-bdblb\") pod \"busybox\" (UID: \"17360a56-547d-4ae3-8398-71b0138ab6da\") " pod="default/busybox"
	Nov 22 00:39:42 embed-certs-540723 kubelet[1473]: I1122 00:39:42.264423    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.014626334 podStartE2EDuration="3.264406278s" podCreationTimestamp="2025-11-22 00:39:39 +0000 UTC" firstStartedPulling="2025-11-22 00:39:39.713542304 +0000 UTC m=+50.938413569" lastFinishedPulling="2025-11-22 00:39:41.963322248 +0000 UTC m=+53.188193513" observedRunningTime="2025-11-22 00:39:42.263899358 +0000 UTC m=+53.488770623" watchObservedRunningTime="2025-11-22 00:39:42.264406278 +0000 UTC m=+53.489277542"
	
	
	==> storage-provisioner [ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2] <==
	I1122 00:39:36.326617       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:39:36.356140       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:39:36.357387       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:39:36.360189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:36.366862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:39:36.367295       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:39:36.367655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-540723_306e2b8d-7a0d-4086-a032-71a1737ff414!
	I1122 00:39:36.367917       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7cee4bc9-5e9f-4de6-955b-19e733e02539", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-540723_306e2b8d-7a0d-4086-a032-71a1737ff414 became leader
	W1122 00:39:36.370740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:36.376434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:39:36.468784       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-540723_306e2b8d-7a0d-4086-a032-71a1737ff414!
	W1122 00:39:38.379813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:38.385005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:40.388948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:40.399057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:42.408002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:42.416852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:44.420772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:44.425561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:46.429338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:46.435165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:48.438195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:48.453200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
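Two details in the logs above are worth flagging. The kubelet entries come from the systemd journal inside the kicbase node container, and the repeated "v1 Endpoints is deprecated" warnings come from the storage-provisioner's Endpoints-based leader-election lock (kube-system/k8s.io-minikube-hostpath). Both can be inspected by hand; a minimal sketch, assuming the embed-certs-540723 profile is still running:

	# Re-collect the last 25 kubelet journal lines from inside the node container
	minikube -p embed-certs-540723 ssh -- sudo journalctl -u kubelet --no-pager -n 25

	# Inspect the Endpoints object used as the leader lock; the holder identity is
	# recorded in the control-plane.alpha.kubernetes.io/leader annotation
	kubectl --context embed-certs-540723 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml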
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-540723 -n embed-certs-540723
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-540723 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
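The assertion DeployApp makes can also be re-run by hand: the test execs into the busybox pod it deployed and reads the open-file-descriptor limit, failing when the value is lower than expected. A minimal sketch, assuming the busybox pod seen in the kubelet log above is still present:

	# Reports the soft open-file limit inside the pod; a value lower than the
	# limit the test expects reproduces the failure
	kubectl --context embed-certs-540723 exec busybox -- /bin/sh -c "ulimit -n"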
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-540723
helpers_test.go:243: (dbg) docker inspect embed-certs-540723:

-- stdout --
	[
	    {
	        "Id": "2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac",
	        "Created": "2025-11-22T00:38:22.582161337Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 217302,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:38:22.655984161Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac/hostname",
	        "HostsPath": "/var/lib/docker/containers/2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac/hosts",
	        "LogPath": "/var/lib/docker/containers/2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac/2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac-json.log",
	        "Name": "/embed-certs-540723",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-540723:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-540723",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "2a8e7036c095bf3f1b322400e8a032cfbb3a6afd85df085cdd535eb4968de8ac",
	                "LowerDir": "/var/lib/docker/overlay2/3a46aba3d3fa36d1688f298eef2435a3cea67bbb3c511b7ec99146b0e3c0a3c8-init/diff:/var/lib/docker/overlay2/7cce95e9587a813ce5f3ee5f28c6de3b78ed608010774b6d981aecaad739a571/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3a46aba3d3fa36d1688f298eef2435a3cea67bbb3c511b7ec99146b0e3c0a3c8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3a46aba3d3fa36d1688f298eef2435a3cea67bbb3c511b7ec99146b0e3c0a3c8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3a46aba3d3fa36d1688f298eef2435a3cea67bbb3c511b7ec99146b0e3c0a3c8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-540723",
	                "Source": "/var/lib/docker/volumes/embed-certs-540723/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-540723",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-540723",
	                "name.minikube.sigs.k8s.io": "embed-certs-540723",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "d5961479ef86f5c6241ccda80619f0ac95b74f2650418a80b341c55988e81a31",
	            "SandboxKey": "/var/run/docker/netns/d5961479ef86",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-540723": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:ff:0c:72:e4:0d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a14dbc80b5256e92aed3d52f6c0493401acc94d166367abd5c8623c0558292e8",
	                    "EndpointID": "35c3956c883dc892a1121dc487c4dff87e2ba899692680d0ad4acc66f5840b52",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-540723",
	                        "2a8e7036c095"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
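The inspect output above shows each exposed container port published on 127.0.0.1 under an ephemeral host port (SSH is 22/tcp -> 33068 here). A single mapping can be pulled out with the same Go template minikube itself uses later in these logs; a minimal sketch:

	# Print the host port bound to the container's SSH port
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-540723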
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-540723 -n embed-certs-540723
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-540723 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-540723 logs -n 25: (1.814043634s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p cert-expiration-285797 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd                                                                                                                                        │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ force-systemd-env-115975 ssh cat /etc/containerd/config.toml                                                                                                                                                                                        │ force-systemd-env-115975     │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ delete  │ -p force-systemd-env-115975                                                                                                                                                                                                                         │ force-systemd-env-115975     │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:34 UTC │
	│ start   │ -p cert-options-089440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd                     │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:34 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ cert-options-089440 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                                                         │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ ssh     │ -p cert-options-089440 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                                                       │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ delete  │ -p cert-options-089440                                                                                                                                                                                                                              │ cert-options-089440          │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:35 UTC │
	│ start   │ -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:35 UTC │ 22 Nov 25 00:36 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-187160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ stop    │ -p old-k8s-version-187160 --alsologtostderr -v=3                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-187160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:36 UTC │
	│ start   │ -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:36 UTC │ 22 Nov 25 00:37 UTC │
	│ image   │ old-k8s-version-187160 image list --format=json                                                                                                                                                                                                     │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ pause   │ -p old-k8s-version-187160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ unpause │ -p old-k8s-version-187160 --alsologtostderr -v=1                                                                                                                                                                                                    │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ delete  │ -p old-k8s-version-187160                                                                                                                                                                                                                           │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ delete  │ -p old-k8s-version-187160                                                                                                                                                                                                                           │ old-k8s-version-187160       │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:37 UTC │
	│ start   │ -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:37 UTC │ 22 Nov 25 00:39 UTC │
	│ start   │ -p cert-expiration-285797 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd                                                                                                                                     │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │ 22 Nov 25 00:38 UTC │
	│ delete  │ -p cert-expiration-285797                                                                                                                                                                                                                           │ cert-expiration-285797       │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │ 22 Nov 25 00:38 UTC │
	│ start   │ -p embed-certs-540723 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:38 UTC │ 22 Nov 25 00:39 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-080784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ stop    │ -p default-k8s-diff-port-080784 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-080784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ start   │ -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:39:41
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:39:41.369390  221580 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:39:41.369501  221580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:39:41.369512  221580 out.go:374] Setting ErrFile to fd 2...
	I1122 00:39:41.369517  221580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:39:41.369776  221580 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:39:41.370139  221580 out.go:368] Setting JSON to false
	I1122 00:39:41.371056  221580 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4919,"bootTime":1763767063,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1122 00:39:41.371144  221580 start.go:143] virtualization:  
	I1122 00:39:41.372976  221580 out.go:179] * [default-k8s-diff-port-080784] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:39:41.374116  221580 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:39:41.374254  221580 notify.go:221] Checking for updates...
	I1122 00:39:41.377451  221580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:39:41.379340  221580 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:39:41.380446  221580 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1122 00:39:41.381542  221580 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:39:41.382885  221580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:39:41.384648  221580 config.go:182] Loaded profile config "default-k8s-diff-port-080784": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:39:41.385243  221580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:39:41.417144  221580 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:39:41.417258  221580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:39:41.480047  221580 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:39:41.470230442 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:39:41.480161  221580 docker.go:319] overlay module found
	I1122 00:39:41.481587  221580 out.go:179] * Using the docker driver based on existing profile
	I1122 00:39:41.482737  221580 start.go:309] selected driver: docker
	I1122 00:39:41.482751  221580 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-080784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-080784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:39:41.482867  221580 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:39:41.483806  221580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:39:41.539880  221580 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:39:41.530966471 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:39:41.540215  221580 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:39:41.540249  221580 cni.go:84] Creating CNI manager for ""
	I1122 00:39:41.540307  221580 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:39:41.540401  221580 start.go:353] cluster config:
	{Name:default-k8s-diff-port-080784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-080784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:39:41.541946  221580 out.go:179] * Starting "default-k8s-diff-port-080784" primary control-plane node in "default-k8s-diff-port-080784" cluster
	I1122 00:39:41.543130  221580 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:39:41.544293  221580 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:39:41.545318  221580 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:39:41.545364  221580 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1122 00:39:41.545376  221580 cache.go:65] Caching tarball of preloaded images
	I1122 00:39:41.545402  221580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:39:41.545450  221580 preload.go:238] Found /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1122 00:39:41.545461  221580 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:39:41.545579  221580 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/config.json ...
	I1122 00:39:41.564830  221580 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:39:41.564857  221580 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:39:41.564877  221580 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:39:41.564901  221580 start.go:360] acquireMachinesLock for default-k8s-diff-port-080784: {Name:mkf1922f37d9de5f76466cb066f0a541ae9dceb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:39:41.564973  221580 start.go:364] duration metric: took 47.656µs to acquireMachinesLock for "default-k8s-diff-port-080784"
	I1122 00:39:41.564996  221580 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:39:41.565007  221580 fix.go:54] fixHost starting: 
	I1122 00:39:41.565283  221580 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:39:41.582199  221580 fix.go:112] recreateIfNeeded on default-k8s-diff-port-080784: state=Stopped err=<nil>
	W1122 00:39:41.582228  221580 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:39:41.583677  221580 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-080784" ...
	I1122 00:39:41.583770  221580 cli_runner.go:164] Run: docker start default-k8s-diff-port-080784
	I1122 00:39:41.854244  221580 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:39:41.875433  221580 kic.go:430] container "default-k8s-diff-port-080784" state is running.
	I1122 00:39:41.878528  221580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-080784
	I1122 00:39:41.905178  221580 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/config.json ...
	I1122 00:39:41.905412  221580 machine.go:94] provisionDockerMachine start ...
	I1122 00:39:41.905480  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:41.925565  221580 main.go:143] libmachine: Using SSH client type: native
	I1122 00:39:41.925946  221580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:39:41.925959  221580 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:39:41.926965  221580 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44478->127.0.0.1:33073: read: connection reset by peer
	I1122 00:39:45.132864  221580 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-080784
	
	I1122 00:39:45.132897  221580 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-080784"
	I1122 00:39:45.133002  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:45.155213  221580 main.go:143] libmachine: Using SSH client type: native
	I1122 00:39:45.155550  221580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:39:45.155591  221580 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-080784 && echo "default-k8s-diff-port-080784" | sudo tee /etc/hostname
	I1122 00:39:45.329836  221580 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-080784
	
	I1122 00:39:45.330113  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:45.351056  221580 main.go:143] libmachine: Using SSH client type: native
	I1122 00:39:45.351421  221580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33073 <nil> <nil>}
	I1122 00:39:45.351445  221580 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-080784' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-080784/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-080784' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:39:45.497595  221580 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:39:45.497688  221580 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-2332/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-2332/.minikube}
	I1122 00:39:45.497739  221580 ubuntu.go:190] setting up certificates
	I1122 00:39:45.497765  221580 provision.go:84] configureAuth start
	I1122 00:39:45.497850  221580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-080784
	I1122 00:39:45.516474  221580 provision.go:143] copyHostCerts
	I1122 00:39:45.516544  221580 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem, removing ...
	I1122 00:39:45.516561  221580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem
	I1122 00:39:45.516637  221580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem (1078 bytes)
	I1122 00:39:45.516741  221580 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem, removing ...
	I1122 00:39:45.516759  221580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem
	I1122 00:39:45.516787  221580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem (1123 bytes)
	I1122 00:39:45.516844  221580 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem, removing ...
	I1122 00:39:45.516853  221580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem
	I1122 00:39:45.516883  221580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem (1675 bytes)
	I1122 00:39:45.516934  221580 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-080784 san=[127.0.0.1 192.168.85.2 default-k8s-diff-port-080784 localhost minikube]
	I1122 00:39:46.115480  221580 provision.go:177] copyRemoteCerts
	I1122 00:39:46.115547  221580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:39:46.115603  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:46.133323  221580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	I1122 00:39:46.235187  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1122 00:39:46.252569  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1122 00:39:46.271475  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:39:46.289707  221580 provision.go:87] duration metric: took 791.907589ms to configureAuth
	I1122 00:39:46.289737  221580 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:39:46.289938  221580 config.go:182] Loaded profile config "default-k8s-diff-port-080784": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:39:46.289950  221580 machine.go:97] duration metric: took 4.384521801s to provisionDockerMachine
	I1122 00:39:46.289959  221580 start.go:293] postStartSetup for "default-k8s-diff-port-080784" (driver="docker")
	I1122 00:39:46.289973  221580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:39:46.290029  221580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:39:46.290075  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:46.307439  221580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	I1122 00:39:46.407271  221580 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:39:46.410445  221580 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:39:46.410475  221580 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:39:46.410486  221580 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/addons for local assets ...
	I1122 00:39:46.410539  221580 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/files for local assets ...
	I1122 00:39:46.410626  221580 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem -> 56232.pem in /etc/ssl/certs
	I1122 00:39:46.410724  221580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:39:46.417883  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:39:46.441171  221580 start.go:296] duration metric: took 151.193977ms for postStartSetup
	I1122 00:39:46.441273  221580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:39:46.441333  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:46.458835  221580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	I1122 00:39:46.556444  221580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
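The two df probes above read /var's used percentage and available gigabytes over SSH. A minimal sketch, assuming a Linux host, of getting the same figures directly from statfs(2) via Go's syscall package (numbers differ slightly from df's rounding):

	// Sketch: /var disk usage straight from statfs(2) (Linux-only).
	package main

	import (
		"fmt"
		"syscall"
	)

	func main() {
		var st syscall.Statfs_t
		if err := syscall.Statfs("/var", &st); err != nil {
			panic(err)
		}
		bsize := uint64(st.Bsize)
		total := st.Blocks * bsize // like df's 1K-blocks column, in bytes
		avail := st.Bavail * bsize // like df -BG's Avail column, in bytes
		usedPct := 100 * float64(total-avail) / float64(total)
		fmt.Printf("/var: %.0f%% used, %dG available\n", usedPct, avail/(1<<30))
	}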
	I1122 00:39:46.561808  221580 fix.go:56] duration metric: took 4.996795473s for fixHost
	I1122 00:39:46.561837  221580 start.go:83] releasing machines lock for "default-k8s-diff-port-080784", held for 4.99685181s
	I1122 00:39:46.561902  221580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-080784
	I1122 00:39:46.579458  221580 ssh_runner.go:195] Run: cat /version.json
	I1122 00:39:46.579517  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:46.579818  221580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:39:46.579876  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:46.605196  221580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	I1122 00:39:46.606351  221580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/default-k8s-diff-port-080784/id_rsa Username:docker}
	I1122 00:39:46.794138  221580 ssh_runner.go:195] Run: systemctl --version
	I1122 00:39:46.800928  221580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:39:46.805281  221580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:39:46.805361  221580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:39:46.812858  221580 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
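The find/mv one-liner above sidelines any bridge or podman CNI config that is not already disabled. A minimal sketch of roughly the same effect with filepath.Glob and os.Rename (minikube actually runs the find command over SSH, so this is illustrative only):

	// Sketch: rename /etc/cni/net.d bridge/podman configs to *.mk_disabled.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		var candidates []string
		for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
			matches, err := filepath.Glob(pat)
			if err != nil {
				panic(err) // only fires on a malformed pattern
			}
			candidates = append(candidates, matches...)
		}
		for _, p := range candidates {
			if strings.HasSuffix(p, ".mk_disabled") {
				continue // mirrors: -not -name *.mk_disabled
			}
			if err := os.Rename(p, p+".mk_disabled"); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}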
	I1122 00:39:46.812882  221580 start.go:496] detecting cgroup driver to use...
	I1122 00:39:46.812912  221580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:39:46.812959  221580 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:39:46.830128  221580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:39:46.843636  221580 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:39:46.843754  221580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:39:46.859963  221580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:39:46.873652  221580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:39:47.007322  221580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:39:47.127996  221580 docker.go:234] disabling docker service ...
	I1122 00:39:47.128065  221580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:39:47.142986  221580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:39:47.156401  221580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:39:47.274070  221580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:39:47.433764  221580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:39:47.458768  221580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:39:47.479356  221580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:39:47.491025  221580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:39:47.502447  221580 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1122 00:39:47.502569  221580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1122 00:39:47.516780  221580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:39:47.527247  221580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:39:47.537916  221580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:39:47.550453  221580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:39:47.563942  221580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:39:47.575324  221580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:39:47.588125  221580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
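The run of sed edits above rewrites /etc/containerd/config.toml in place: cgroupfs instead of systemd cgroups, the runc v2 shim, the CNI conf dir, and unprivileged ports. A minimal sketch of the SystemdCgroup rewrite expressed with Go's regexp package rather than sed; the TOML fragment is illustrative, keyed to the plugin ID visible in the log:

	// Sketch: the `SystemdCgroup = false` rewrite as a Go regexp.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		conf := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
		// Same pattern as: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
		re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
		fmt.Print(re.ReplaceAllString(conf, "${1}SystemdCgroup = false"))
	}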
	I1122 00:39:47.599481  221580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:39:47.607909  221580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:39:47.624374  221580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:39:47.780232  221580 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1122 00:39:48.013304  221580 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:39:48.013419  221580 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:39:48.021440  221580 start.go:564] Will wait 60s for crictl version
	I1122 00:39:48.021508  221580 ssh_runner.go:195] Run: which crictl
	I1122 00:39:48.026151  221580 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:39:48.076817  221580 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:39:48.076890  221580 ssh_runner.go:195] Run: containerd --version
	I1122 00:39:48.099870  221580 ssh_runner.go:195] Run: containerd --version
	I1122 00:39:48.129385  221580 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1122 00:39:48.132370  221580 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-080784 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:39:48.148516  221580 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1122 00:39:48.156203  221580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:39:48.173281  221580 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-080784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-080784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:39:48.173419  221580 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:39:48.173485  221580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:39:48.208052  221580 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:39:48.208078  221580 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:39:48.208136  221580 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:39:48.243658  221580 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:39:48.243685  221580 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:39:48.243693  221580 kubeadm.go:935] updating node { 192.168.85.2 8444 v1.34.1 containerd true true} ...
	I1122 00:39:48.243798  221580 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-080784 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-080784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:39:48.243866  221580 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:39:48.274307  221580 cni.go:84] Creating CNI manager for ""
	I1122 00:39:48.274334  221580 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:39:48.274357  221580 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1122 00:39:48.274379  221580 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-080784 NodeName:default-k8s-diff-port-080784 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:39:48.274497  221580 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-080784"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:39:48.274570  221580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:39:48.287244  221580 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:39:48.287312  221580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:39:48.295008  221580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (332 bytes)
	I1122 00:39:48.310783  221580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:39:48.324497  221580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2241 bytes)
	I1122 00:39:48.338795  221580 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:39:48.342444  221580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
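The bash one-liner above (also used earlier for host.minikube.internal) is an upsert: drop any line already ending in the tab-separated name, then append the fresh mapping and copy the result back. A minimal sketch of the same logic as a pure Go function, with illustrative input:

	// Sketch: the /etc/hosts upsert from the one-liner above.
	package main

	import (
		"fmt"
		"strings"
	)

	func upsertHost(hosts, ip, name string) string {
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+name) {
				continue // mirrors: grep -v $'\t<name>$'
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+name) // mirrors: echo "<ip>\t<name>"
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		before := "127.0.0.1\tlocalhost\n192.168.85.2\tcontrol-plane.minikube.internal\n"
		fmt.Print(upsertHost(before, "192.168.85.2", "control-plane.minikube.internal"))
	}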
	I1122 00:39:48.352818  221580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:39:48.518434  221580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:39:48.538925  221580 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784 for IP: 192.168.85.2
	I1122 00:39:48.538997  221580 certs.go:195] generating shared ca certs ...
	I1122 00:39:48.539037  221580 certs.go:227] acquiring lock for ca certs: {Name:mk348a892ec4309987f6c81ee1acef4884ca62db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:39:48.539253  221580 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key
	I1122 00:39:48.539346  221580 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key
	I1122 00:39:48.539374  221580 certs.go:257] generating profile certs ...
	I1122 00:39:48.539515  221580 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/client.key
	I1122 00:39:48.539643  221580 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/apiserver.key.1a80c1c6
	I1122 00:39:48.539732  221580 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/proxy-client.key
	I1122 00:39:48.539898  221580 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem (1338 bytes)
	W1122 00:39:48.539967  221580 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623_empty.pem, impossibly tiny 0 bytes
	I1122 00:39:48.540007  221580 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:39:48.540065  221580 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:39:48.540135  221580 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:39:48.540199  221580 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem (1675 bytes)
	I1122 00:39:48.540293  221580 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:39:48.541212  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:39:48.574823  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:39:48.602115  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:39:48.641380  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:39:48.665583  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1122 00:39:48.703501  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:39:48.750105  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:39:48.801458  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:39:48.830355  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem --> /usr/share/ca-certificates/5623.pem (1338 bytes)
	I1122 00:39:48.893624  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /usr/share/ca-certificates/56232.pem (1708 bytes)
	I1122 00:39:48.941271  221580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:39:48.991138  221580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:39:49.015792  221580 ssh_runner.go:195] Run: openssl version
	I1122 00:39:49.025948  221580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:39:49.035219  221580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:39:49.039986  221580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:39:49.040098  221580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:39:49.083464  221580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:39:49.096068  221580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5623.pem && ln -fs /usr/share/ca-certificates/5623.pem /etc/ssl/certs/5623.pem"
	I1122 00:39:49.108900  221580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5623.pem
	I1122 00:39:49.114052  221580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/5623.pem
	I1122 00:39:49.114202  221580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5623.pem
	I1122 00:39:49.185413  221580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5623.pem /etc/ssl/certs/51391683.0"
	I1122 00:39:49.195874  221580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56232.pem && ln -fs /usr/share/ca-certificates/56232.pem /etc/ssl/certs/56232.pem"
	I1122 00:39:49.206719  221580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56232.pem
	I1122 00:39:49.211067  221580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/56232.pem
	I1122 00:39:49.211191  221580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56232.pem
	I1122 00:39:49.263330  221580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56232.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:39:49.287245  221580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:39:49.296863  221580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:39:49.384891  221580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:39:49.510892  221580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:39:49.587389  221580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:39:49.699742  221580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:39:49.784647  221580 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
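Each `openssl x509 -checkend 86400` probe above asks one question: will this certificate expire within the next 24 hours? A minimal sketch of the same check in Go's crypto/x509 (the cert path is taken from the log; error handling kept deliberately blunt):

	// Sketch: the -checkend 86400 probe, in Go.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1) // mirrors openssl's non-zero exit for -checkend
		}
		fmt.Println("certificate is good for at least another 24h")
	}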
	I1122 00:39:49.887696  221580 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-080784 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-080784 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:39:49.887786  221580 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:39:49.887869  221580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:39:49.980715  221580 cri.go:89] found id: "dd1c31c190b8d58736153cdfd0ab482dc6475f2b7815a864c12f337e871ecedf"
	I1122 00:39:49.980733  221580 cri.go:89] found id: "0d64e81dc0ad86dfdc788f45b92eb4f6b62dbaf055c1e11c826378c9472097ed"
	I1122 00:39:49.980738  221580 cri.go:89] found id: "3951649c708fd5267d0ad4e41c0bfc6891129e251b06f5aa4d39d9c92c5aefd9"
	I1122 00:39:49.980742  221580 cri.go:89] found id: "252561cb6cab27d6a08d413150f7d821814252ec16e8d8b445220ccf8ed920c2"
	I1122 00:39:49.980745  221580 cri.go:89] found id: "3b6f77ac2c3c3d3ce2d9fb2efa01e84808ffcdc9a6c4657767c211ebd5bddbd1"
	I1122 00:39:49.980749  221580 cri.go:89] found id: "d9ab3ff2e6b49bf65ed2711f9dfb88ffa0b207339e178767951977bb5979d8bb"
	I1122 00:39:49.980753  221580 cri.go:89] found id: "1d53549631ceb733afa5892dc05607424c2b5352e3b607632d6fe7db11205546"
	I1122 00:39:49.980756  221580 cri.go:89] found id: "e6948214b5c72c4b8f9a109a57b816f6a486408644295454dbb384df552ea8d7"
	I1122 00:39:49.980759  221580 cri.go:89] found id: "1f283db038f6611fb92be8c77623b177cb33d57f8a5645f03b6d191a2594fc2d"
	I1122 00:39:49.980766  221580 cri.go:89] found id: ""
	I1122 00:39:49.980819  221580 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1122 00:39:50.010636  221580 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"3bacbfeb8918ddfd5966420af966056e25ccf56e5e672071fef18d31574f1a87","pid":971,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bacbfeb8918ddfd5966420af966056e25ccf56e5e672071fef18d31574f1a87","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/3bacbfeb8918ddfd5966420af966056e25ccf56e5e672071fef18d31574f1a87/rootfs","created":"2025-11-22T00:39:49.79805525Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"3bacbfeb8918ddfd5966420af966056e25ccf56e5e672071fef18d31574f1a87","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-default-k8s-diff-port-080784_efc5d5e803936f99dd50b27dc38b33d5","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-default-k8s-diff-port-080784","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"efc5d5e803936f99dd50b27dc38b33d5"},"owner":"root"},{"ociVersion":"1.2.1","id":"5fedbd49c6696f7aa824e1c8048d8b0ce189df7f0f06d31e23028e6d49374f04","pid":928,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5fedbd49c6696f7aa824e1c8048d8b0ce189df7f0f06d31e23028e6d49374f04","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/5fedbd49c6696f7aa824e1c8048d8b0ce189df7f0f06d31e23028e6d49374f04/rootfs","created":"2025-11-22T00:39:49.684893256Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"5fedbd49c6696f7aa824e1c8048d8b0ce189df7f0f06d31e23028e6d49374f04","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-default-k8s-diff-port-080784_12f35e793c097a61d745ed686a5452e0","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-default-k8s-diff-port-080784","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"12f35e793c097a61d745ed686a5452e0"},"owner":"root"},{"ociVersion":"1.2.1","id":"79609b38ce1a08a8a0da0cc3db1a46664a580fb2e15a808b459359e9b373cca7","pid":955,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79609b38ce1a08a8a0da0cc3db1a46664a580fb2e15a808b459359e9b373cca7","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/79609b38ce1a08a8a0da0cc3db1a46664a580fb2e15a808b459359e9b373cca7/rootfs","created":"2025-11-22T00:39:49.817115997Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"79609b38ce1a08a8a0da0cc3db1a46664a580fb2e15a808b459359e9b373cca7","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-default-k8s-diff-port-080784_edccddf3822c7550d1a7e9da5b6a9bdd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-default-k8s-diff-port-080784","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"edccddf3822c7550d1a7e9da5b6a9bdd"},"owner":"root"},{"ociVersion":"1.2.1","id":"8c36f086e11bed2e36716bb07b22314219215f6987d54bed6d60f3ba081b4926","pid":828,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c36f086e11bed2e36716bb07b22314219215f6987d54bed6d60f3ba081b4926","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/8c36f086e11bed2e36716bb07b22314219215f6987d54bed6d60f3ba081b4926/rootfs","created":"2025-11-22T00:39:49.425219778Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"8c36f086e11bed2e36716bb07b22314219215f6987d54bed6d60f3ba081b4926","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-default-k8s-diff-port-080784_e8812db65d6741249779c479d87b4c4d","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-080784","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e8812db65d6741249779c479d87b4c4d"},"owner":"root"},{"ociVersion":"1.2.1","id":"dd1c31c190b8d58736153cdfd0ab482dc6475f2b7815a864c12f337e871ecedf","pid":965,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd1c31c190b8d58736153cdfd0ab482dc6475f2b7815a864c12f337e871ecedf","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/dd1c31c190b8d58736153cdfd0ab482dc6475f2b7815a864c12f337e871ecedf/rootfs","created":"2025-11-22T00:39:49.849545515Z","annotations":{"io.kubernetes.cri.container-name":"etcd","io.kubernetes.cri.container-type":"container","io.kubernetes.cri.image-name":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri.sandbox-id":"8c36f086e11bed2e36716bb07b22314219215f6987d54bed6d60f3ba081b4926","io.kubernetes.cri.sandbox-name":"etcd-default-k8s-diff-port-080784","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"e8812db65d6741249779c479d87b4c4d"},"owner":"root"}]
	I1122 00:39:50.010808  221580 cri.go:126] list returned 5 containers
	I1122 00:39:50.010818  221580 cri.go:129] container: {ID:3bacbfeb8918ddfd5966420af966056e25ccf56e5e672071fef18d31574f1a87 Status:running}
	I1122 00:39:50.010842  221580 cri.go:131] skipping 3bacbfeb8918ddfd5966420af966056e25ccf56e5e672071fef18d31574f1a87 - not in ps
	I1122 00:39:50.010847  221580 cri.go:129] container: {ID:5fedbd49c6696f7aa824e1c8048d8b0ce189df7f0f06d31e23028e6d49374f04 Status:running}
	I1122 00:39:50.010852  221580 cri.go:131] skipping 5fedbd49c6696f7aa824e1c8048d8b0ce189df7f0f06d31e23028e6d49374f04 - not in ps
	I1122 00:39:50.010856  221580 cri.go:129] container: {ID:79609b38ce1a08a8a0da0cc3db1a46664a580fb2e15a808b459359e9b373cca7 Status:running}
	I1122 00:39:50.010869  221580 cri.go:131] skipping 79609b38ce1a08a8a0da0cc3db1a46664a580fb2e15a808b459359e9b373cca7 - not in ps
	I1122 00:39:50.010873  221580 cri.go:129] container: {ID:8c36f086e11bed2e36716bb07b22314219215f6987d54bed6d60f3ba081b4926 Status:running}
	I1122 00:39:50.010878  221580 cri.go:131] skipping 8c36f086e11bed2e36716bb07b22314219215f6987d54bed6d60f3ba081b4926 - not in ps
	I1122 00:39:50.010882  221580 cri.go:129] container: {ID:dd1c31c190b8d58736153cdfd0ab482dc6475f2b7815a864c12f337e871ecedf Status:running}
	I1122 00:39:50.010890  221580 cri.go:135] skipping {dd1c31c190b8d58736153cdfd0ab482dc6475f2b7815a864c12f337e871ecedf running}: state = "running", want "paused"
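The cri.go lines above apply a two-stage filter to the runc listing: a container survives only if crictl listed it ("in ps") and its runc state matches the wanted state ("paused" on this restart path). A minimal sketch of that logic; the IDs are truncated placeholders, not the real ones:

	// Sketch: the "not in ps" / "state = X, want Y" filter from cri.go.
	package main

	import "fmt"

	type ctr struct {
		ID    string
		State string
	}

	func filterByState(listed []ctr, inPs map[string]bool, want string) []string {
		var keep []string
		for _, c := range listed {
			if !inPs[c.ID] {
				continue // logged as: "skipping <id> - not in ps"
			}
			if c.State != want {
				continue // logged as: `state = "running", want "paused"`
			}
			keep = append(keep, c.ID)
		}
		return keep
	}

	func main() {
		listed := []ctr{
			{"3bacbfeb...", "running"}, // sandbox, absent from crictl ps
			{"dd1c31c1...", "running"}, // etcd container, running not paused
		}
		inPs := map[string]bool{"dd1c31c1...": true}
		fmt.Println(filterByState(listed, inPs, "paused")) // prints: []
	}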
	I1122 00:39:50.010969  221580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:39:50.043828  221580 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:39:50.043845  221580 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:39:50.043903  221580 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:39:50.060166  221580 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:39:50.060976  221580 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-080784" does not appear in /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:39:50.061472  221580 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-2332/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-080784" cluster setting kubeconfig missing "default-k8s-diff-port-080784" context setting]
	I1122 00:39:50.062229  221580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:39:50.064289  221580 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:39:50.080590  221580 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1122 00:39:50.080625  221580 kubeadm.go:602] duration metric: took 36.769959ms to restartPrimaryControlPlane
	I1122 00:39:50.080635  221580 kubeadm.go:403] duration metric: took 192.950932ms to StartCluster
	I1122 00:39:50.080649  221580 settings.go:142] acquiring lock: {Name:mk5b79634916fd13f05f4c848ff3e8b07cafa39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:39:50.080712  221580 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:39:50.082220  221580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:39:50.082474  221580 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:39:50.082938  221580 config.go:182] Loaded profile config "default-k8s-diff-port-080784": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:39:50.082981  221580 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:39:50.083048  221580 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-080784"
	I1122 00:39:50.083062  221580 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-080784"
	W1122 00:39:50.083069  221580 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:39:50.083090  221580 host.go:66] Checking if "default-k8s-diff-port-080784" exists ...
	I1122 00:39:50.084042  221580 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:39:50.084220  221580 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-080784"
	I1122 00:39:50.084342  221580 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-080784"
	I1122 00:39:50.084636  221580 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:39:50.084779  221580 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-080784"
	I1122 00:39:50.084797  221580 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-080784"
	W1122 00:39:50.084805  221580 addons.go:248] addon metrics-server should already be in state true
	I1122 00:39:50.084831  221580 host.go:66] Checking if "default-k8s-diff-port-080784" exists ...
	I1122 00:39:50.085331  221580 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:39:50.090522  221580 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-080784"
	I1122 00:39:50.090559  221580 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-080784"
	W1122 00:39:50.090569  221580 addons.go:248] addon dashboard should already be in state true
	I1122 00:39:50.090607  221580 host.go:66] Checking if "default-k8s-diff-port-080784" exists ...
	I1122 00:39:50.090835  221580 out.go:179] * Verifying Kubernetes components...
	I1122 00:39:50.091393  221580 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:39:50.096258  221580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:39:50.163705  221580 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:39:50.169037  221580 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:39:50.169060  221580 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:39:50.169126  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:50.204360  221580 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-080784"
	W1122 00:39:50.204382  221580 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:39:50.204405  221580 host.go:66] Checking if "default-k8s-diff-port-080784" exists ...
	I1122 00:39:50.204810  221580 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-080784 --format={{.State.Status}}
	I1122 00:39:50.245576  221580 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1122 00:39:50.249258  221580 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1122 00:39:50.249300  221580 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1122 00:39:50.249371  221580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-080784
	I1122 00:39:50.262933  221580 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:39:50.268315  221580 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	417642f210950       1611cd07b61d5       10 seconds ago       Running             busybox                   0                   d0df139c45090       busybox                                      default
	ba940c4a9dbef       ba04bb24b9575       15 seconds ago       Running             storage-provisioner       0                   95b575225df8c       storage-provisioner                          kube-system
	5a7b746b45e8d       138784d87c9c5       15 seconds ago       Running             coredns                   0                   5451ff5098f78       coredns-66bc5c9577-kbk5c                     kube-system
	e96eec53caa12       b1a8c6f707935       57 seconds ago       Running             kindnet-cni               0                   d7abdafee62af       kindnet-bls8b                                kube-system
	b17cc18fbe3e8       05baa95f5142d       57 seconds ago       Running             kube-proxy                0                   5c0789192c67b       kube-proxy-vgr8w                             kube-system
	09d0d29b2b446       7eb2c6ff0c5a7       About a minute ago   Running             kube-controller-manager   0                   4f1cd6769a291       kube-controller-manager-embed-certs-540723   kube-system
	d4ed557ae39e2       43911e833d64d       About a minute ago   Running             kube-apiserver            0                   07cbe223c77f2       kube-apiserver-embed-certs-540723            kube-system
	8e2888c32825b       b5f57ec6b9867       About a minute ago   Running             kube-scheduler            0                   482168df46067       kube-scheduler-embed-certs-540723            kube-system
	5b278fa6de142       a1894772a478e       About a minute ago   Running             etcd                      0                   c90bb953cbcad       etcd-embed-certs-540723                      kube-system
	
	
	==> containerd <==
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.206817062Z" level=info msg="Container ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.210586557Z" level=info msg="CreateContainer within sandbox \"5451ff5098f7819cc5e18faf2c2a9ed5c4573a10ad9ddffa889412647bd68ff3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5a7b746b45e8d888b198cdf85f23f45e6c62878404d2f2efe38caef5d954731f\""
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.211454992Z" level=info msg="StartContainer for \"5a7b746b45e8d888b198cdf85f23f45e6c62878404d2f2efe38caef5d954731f\""
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.212784839Z" level=info msg="connecting to shim 5a7b746b45e8d888b198cdf85f23f45e6c62878404d2f2efe38caef5d954731f" address="unix:///run/containerd/s/16ea7e1cb9fee3ccba69bad3d6662daa7b8c9e334a0d2718bd814f715c339c13" protocol=ttrpc version=3
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.221172523Z" level=info msg="CreateContainer within sandbox \"95b575225df8c54a343f4540072d2aef76f3467187a1d32156243e1d6acac8af\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2\""
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.223886594Z" level=info msg="StartContainer for \"ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2\""
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.225704391Z" level=info msg="connecting to shim ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2" address="unix:///run/containerd/s/5a07a40a8ec11c485ebf638844b5e350d89102ad034c4ccf68e1e904b846b7c3" protocol=ttrpc version=3
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.300402358Z" level=info msg="StartContainer for \"ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2\" returns successfully"
	Nov 22 00:39:36 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:36.309841936Z" level=info msg="StartContainer for \"5a7b746b45e8d888b198cdf85f23f45e6c62878404d2f2efe38caef5d954731f\" returns successfully"
	Nov 22 00:39:39 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:39.619718047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17360a56-547d-4ae3-8398-71b0138ab6da,Namespace:default,Attempt:0,}"
	Nov 22 00:39:39 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:39.658059434Z" level=info msg="connecting to shim d0df139c45090491664b7043e2723b014525cb7f88488dcb0f54203446309bad" address="unix:///run/containerd/s/c1209c15a2dce81dcbe0e3c34109f8b81ba3dd2b120fbca121035110dcedd9ed" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:39:39 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:39.711711945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:17360a56-547d-4ae3-8398-71b0138ab6da,Namespace:default,Attempt:0,} returns sandbox id \"d0df139c45090491664b7043e2723b014525cb7f88488dcb0f54203446309bad\""
	Nov 22 00:39:39 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:39.715754354Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.954035377Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.955550926Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937187"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.957250798Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.960549673Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.961366964Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.245565094s"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.962364606Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.972284068Z" level=info msg="CreateContainer within sandbox \"d0df139c45090491664b7043e2723b014525cb7f88488dcb0f54203446309bad\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.989535445Z" level=info msg="Container 417642f210950d246f2a17a61fbc30d2eb045f214fc30d2dbf4906a0d896ef79: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:39:41 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.997242627Z" level=info msg="CreateContainer within sandbox \"d0df139c45090491664b7043e2723b014525cb7f88488dcb0f54203446309bad\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"417642f210950d246f2a17a61fbc30d2eb045f214fc30d2dbf4906a0d896ef79\""
	Nov 22 00:39:42 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:41.999086451Z" level=info msg="StartContainer for \"417642f210950d246f2a17a61fbc30d2eb045f214fc30d2dbf4906a0d896ef79\""
	Nov 22 00:39:42 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:42.000528554Z" level=info msg="connecting to shim 417642f210950d246f2a17a61fbc30d2eb045f214fc30d2dbf4906a0d896ef79" address="unix:///run/containerd/s/c1209c15a2dce81dcbe0e3c34109f8b81ba3dd2b120fbca121035110dcedd9ed" protocol=ttrpc version=3
	Nov 22 00:39:42 embed-certs-540723 containerd[757]: time="2025-11-22T00:39:42.122723165Z" level=info msg="StartContainer for \"417642f210950d246f2a17a61fbc30d2eb045f214fc30d2dbf4906a0d896ef79\" returns successfully"
	
	
	==> coredns [5a7b746b45e8d888b198cdf85f23f45e6c62878404d2f2efe38caef5d954731f] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45472 - 15231 "HINFO IN 3645877841633436492.8105226943450054855. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013994545s
	
	
	==> describe nodes <==
	Name:               embed-certs-540723
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-540723
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=embed-certs-540723
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_38_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:38:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-540723
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:39:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:39:50 +0000   Sat, 22 Nov 2025 00:38:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:39:50 +0000   Sat, 22 Nov 2025 00:38:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:39:50 +0000   Sat, 22 Nov 2025 00:38:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:39:50 +0000   Sat, 22 Nov 2025 00:39:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-540723
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                0ba07f99-f6b3-4765-89ca-c97702e7d0a8
	  Boot ID:                    4e86741a-5896-4eb6-97ce-70ea8beedc67
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-66bc5c9577-kbk5c                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     58s
	  kube-system                 etcd-embed-certs-540723                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         63s
	  kube-system                 kindnet-bls8b                                 100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-embed-certs-540723             250m (12%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-controller-manager-embed-certs-540723    200m (10%)    0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 kube-proxy-vgr8w                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-embed-certs-540723             100m (5%)     0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 57s   kube-proxy       
	  Normal   Starting                 64s   kubelet          Starting kubelet.
	  Warning  CgroupV1                 64s   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  63s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  63s   kubelet          Node embed-certs-540723 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    63s   kubelet          Node embed-certs-540723 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     63s   kubelet          Node embed-certs-540723 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           60s   node-controller  Node embed-certs-540723 event: Registered Node embed-certs-540723 in Controller
	  Normal   NodeReady                17s   kubelet          Node embed-certs-540723 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.017121] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498034] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037542] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808656] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.648915] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 23:58] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=00000000f9ea0775{9P.session} n=0000000035823f74
	[  +0.001177] FS-Cache: O-key=[10] '34323935353131333738'
	[  +0.000819] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000f9ea0775{9P.session} n=00000000dbfd8515
	[  +0.001154] FS-Cache: N-key=[10] '34323935353131333738'
	[Nov22 00:00] hrtimer: interrupt took 9958927 ns
	
	
	==> etcd [5b278fa6de1425390a0f2ab37ad3056a4ef9b8c25d34d2469e67e8d09035920b] <==
	{"level":"warn","ts":"2025-11-22T00:38:44.127513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.170279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.197824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.241956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.262434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.296075Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52296","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.322557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.357696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.391895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.412723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.465457Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.489436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.521165Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.559742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.577985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.611394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.647297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.675765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.707278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.749273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.796708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.825782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.838903Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.861372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:38:44.970472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52632","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:39:52 up  1:22,  0 user,  load average: 3.11, 3.49, 2.92
	Linux embed-certs-540723 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e96eec53caa12f2789abf288897d09a32050e76fa006aacd677ab995420f0510] <==
	I1122 00:38:55.426340       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:38:55.426639       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1122 00:38:55.426761       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:38:55.426780       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:38:55.426804       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:38:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:38:55.625448       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:38:55.625475       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:38:55.625484       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:38:55.625612       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1122 00:39:25.625983       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1122 00:39:25.626086       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1122 00:39:25.626164       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1122 00:39:25.626240       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1122 00:39:27.125606       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:39:27.125862       1 metrics.go:72] Registering metrics
	I1122 00:39:27.126034       1 controller.go:711] "Syncing nftables rules"
	I1122 00:39:35.624855       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:39:35.624913       1 main.go:301] handling current node
	I1122 00:39:45.623806       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1122 00:39:45.623923       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d4ed557ae39e2cd10f702f86bc509e03169955fb37eaef6049b2e395f4d794cf] <==
	I1122 00:38:46.242291       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1122 00:38:46.253247       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1122 00:38:46.255623       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1122 00:38:46.276027       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:46.276453       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1122 00:38:46.300545       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:46.301687       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:38:46.815770       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:38:46.822917       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:38:46.822948       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:38:47.636164       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:38:47.697858       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:38:47.818209       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:38:47.826106       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1122 00:38:47.827379       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:38:47.832958       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:38:47.906935       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:38:48.979089       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:38:49.011513       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:38:49.024120       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:38:53.740834       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:53.759037       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:38:53.905787       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1122 00:38:54.008989       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1122 00:39:47.510862       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:37434: use of closed network connection
	
	
	==> kube-controller-manager [09d0d29b2b4463ef0668b6e7a3bbcefa2bc1324d092277467c2c17aa98c89659] <==
	I1122 00:38:52.934439       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:38:52.941831       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1122 00:38:52.947825       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:38:52.948117       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:38:52.948181       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:38:52.948189       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:38:52.950364       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:38:52.952994       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:38:52.953515       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:38:52.953531       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1122 00:38:52.954402       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1122 00:38:52.954533       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-540723"
	I1122 00:38:52.953580       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1122 00:38:52.953645       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1122 00:38:52.953654       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:38:52.953675       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1122 00:38:52.953684       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:38:52.954609       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1122 00:38:52.953632       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:38:52.953612       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:38:52.958914       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1122 00:38:52.967829       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1122 00:38:52.974190       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1122 00:38:52.976532       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1122 00:39:38.001337       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [b17cc18fbe3e8015a538304c36e2e61a083b18d39873526539a1044c5af14384] <==
	I1122 00:38:55.096260       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:38:55.216607       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:38:55.324400       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:38:55.324439       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1122 00:38:55.324508       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:38:55.414647       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:38:55.414798       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:38:55.432300       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:38:55.436082       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:38:55.436118       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:38:55.437577       1 config.go:200] "Starting service config controller"
	I1122 00:38:55.437603       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:38:55.437619       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:38:55.437624       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:38:55.437635       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:38:55.437643       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:38:55.438797       1 config.go:309] "Starting node config controller"
	I1122 00:38:55.438812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:38:55.438820       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:38:55.538423       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:38:55.538465       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:38:55.538520       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8e2888c32825b3194c1ac5f65176e35c9019823beec79329c9c82e04e463c2c9] <==
	I1122 00:38:46.368490       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1122 00:38:46.376898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E1122 00:38:46.378055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1122 00:38:46.384023       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:38:46.384804       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1122 00:38:46.387221       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1122 00:38:46.387337       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:38:46.387685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1122 00:38:46.387834       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:38:46.387896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1122 00:38:46.387964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1122 00:38:46.388021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1122 00:38:46.388111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1122 00:38:46.388394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1122 00:38:46.388596       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1122 00:38:46.388990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1122 00:38:46.389284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1122 00:38:46.389575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1122 00:38:46.389791       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1122 00:38:46.390031       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1122 00:38:47.212709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1122 00:38:47.225057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1122 00:38:47.267881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1122 00:38:47.310781       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1122 00:38:47.867909       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:38:49 embed-certs-540723 kubelet[1473]: I1122 00:38:49.981160    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-540723" podStartSLOduration=0.981152844 podStartE2EDuration="981.152844ms" podCreationTimestamp="2025-11-22 00:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:49.940521707 +0000 UTC m=+1.165393005" watchObservedRunningTime="2025-11-22 00:38:49.981152844 +0000 UTC m=+1.206024108"
	Nov 22 00:38:50 embed-certs-540723 kubelet[1473]: I1122 00:38:50.013659    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-540723" podStartSLOduration=1.013636053 podStartE2EDuration="1.013636053s" podCreationTimestamp="2025-11-22 00:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:49.994899928 +0000 UTC m=+1.219771193" watchObservedRunningTime="2025-11-22 00:38:50.013636053 +0000 UTC m=+1.238507326"
	Nov 22 00:38:50 embed-certs-540723 kubelet[1473]: I1122 00:38:50.063372    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-540723" podStartSLOduration=1.063352188 podStartE2EDuration="1.063352188s" podCreationTimestamp="2025-11-22 00:38:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:50.014024087 +0000 UTC m=+1.238895368" watchObservedRunningTime="2025-11-22 00:38:50.063352188 +0000 UTC m=+1.288223461"
	Nov 22 00:38:52 embed-certs-540723 kubelet[1473]: I1122 00:38:52.934556    1473 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:38:52 embed-certs-540723 kubelet[1473]: I1122 00:38:52.935683    1473 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235796    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/87297594-e2ec-4d97-af64-37ac318d3bba-lib-modules\") pod \"kindnet-bls8b\" (UID: \"87297594-e2ec-4d97-af64-37ac318d3bba\") " pod="kube-system/kindnet-bls8b"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235850    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksplb\" (UniqueName: \"kubernetes.io/projected/87297594-e2ec-4d97-af64-37ac318d3bba-kube-api-access-ksplb\") pod \"kindnet-bls8b\" (UID: \"87297594-e2ec-4d97-af64-37ac318d3bba\") " pod="kube-system/kindnet-bls8b"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235874    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fae6b664-123d-4c6b-87fe-a48172bb5ec2-kube-proxy\") pod \"kube-proxy-vgr8w\" (UID: \"fae6b664-123d-4c6b-87fe-a48172bb5ec2\") " pod="kube-system/kube-proxy-vgr8w"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235897    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fae6b664-123d-4c6b-87fe-a48172bb5ec2-xtables-lock\") pod \"kube-proxy-vgr8w\" (UID: \"fae6b664-123d-4c6b-87fe-a48172bb5ec2\") " pod="kube-system/kube-proxy-vgr8w"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235916    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fae6b664-123d-4c6b-87fe-a48172bb5ec2-lib-modules\") pod \"kube-proxy-vgr8w\" (UID: \"fae6b664-123d-4c6b-87fe-a48172bb5ec2\") " pod="kube-system/kube-proxy-vgr8w"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235938    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/87297594-e2ec-4d97-af64-37ac318d3bba-xtables-lock\") pod \"kindnet-bls8b\" (UID: \"87297594-e2ec-4d97-af64-37ac318d3bba\") " pod="kube-system/kindnet-bls8b"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235953    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xcr\" (UniqueName: \"kubernetes.io/projected/fae6b664-123d-4c6b-87fe-a48172bb5ec2-kube-api-access-44xcr\") pod \"kube-proxy-vgr8w\" (UID: \"fae6b664-123d-4c6b-87fe-a48172bb5ec2\") " pod="kube-system/kube-proxy-vgr8w"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.235974    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/87297594-e2ec-4d97-af64-37ac318d3bba-cni-cfg\") pod \"kindnet-bls8b\" (UID: \"87297594-e2ec-4d97-af64-37ac318d3bba\") " pod="kube-system/kindnet-bls8b"
	Nov 22 00:38:54 embed-certs-540723 kubelet[1473]: I1122 00:38:54.361323    1473 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:38:56 embed-certs-540723 kubelet[1473]: I1122 00:38:56.154989    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-bls8b" podStartSLOduration=2.154968311 podStartE2EDuration="2.154968311s" podCreationTimestamp="2025-11-22 00:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:56.139915158 +0000 UTC m=+7.364786521" watchObservedRunningTime="2025-11-22 00:38:56.154968311 +0000 UTC m=+7.379839584"
	Nov 22 00:38:57 embed-certs-540723 kubelet[1473]: I1122 00:38:57.578497    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vgr8w" podStartSLOduration=3.57847757 podStartE2EDuration="3.57847757s" podCreationTimestamp="2025-11-22 00:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:38:56.157012696 +0000 UTC m=+7.381883985" watchObservedRunningTime="2025-11-22 00:38:57.57847757 +0000 UTC m=+8.803348835"
	Nov 22 00:39:35 embed-certs-540723 kubelet[1473]: I1122 00:39:35.714405    1473 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:39:35 embed-certs-540723 kubelet[1473]: I1122 00:39:35.775021    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a437499-56df-4fce-a457-cf615ef0abb8-config-volume\") pod \"coredns-66bc5c9577-kbk5c\" (UID: \"7a437499-56df-4fce-a457-cf615ef0abb8\") " pod="kube-system/coredns-66bc5c9577-kbk5c"
	Nov 22 00:39:35 embed-certs-540723 kubelet[1473]: I1122 00:39:35.775071    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/770a06bc-52a0-4be2-88ac-a35e62e96a5b-tmp\") pod \"storage-provisioner\" (UID: \"770a06bc-52a0-4be2-88ac-a35e62e96a5b\") " pod="kube-system/storage-provisioner"
	Nov 22 00:39:35 embed-certs-540723 kubelet[1473]: I1122 00:39:35.775100    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdk7d\" (UniqueName: \"kubernetes.io/projected/7a437499-56df-4fce-a457-cf615ef0abb8-kube-api-access-qdk7d\") pod \"coredns-66bc5c9577-kbk5c\" (UID: \"7a437499-56df-4fce-a457-cf615ef0abb8\") " pod="kube-system/coredns-66bc5c9577-kbk5c"
	Nov 22 00:39:35 embed-certs-540723 kubelet[1473]: I1122 00:39:35.775122    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tc6hf\" (UniqueName: \"kubernetes.io/projected/770a06bc-52a0-4be2-88ac-a35e62e96a5b-kube-api-access-tc6hf\") pod \"storage-provisioner\" (UID: \"770a06bc-52a0-4be2-88ac-a35e62e96a5b\") " pod="kube-system/storage-provisioner"
	Nov 22 00:39:37 embed-certs-540723 kubelet[1473]: I1122 00:39:37.263920    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kbk5c" podStartSLOduration=43.263891139 podStartE2EDuration="43.263891139s" podCreationTimestamp="2025-11-22 00:38:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:39:37.239250985 +0000 UTC m=+48.464122299" watchObservedRunningTime="2025-11-22 00:39:37.263891139 +0000 UTC m=+48.488762412"
	Nov 22 00:39:37 embed-certs-540723 kubelet[1473]: I1122 00:39:37.278256    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=42.278236113 podStartE2EDuration="42.278236113s" podCreationTimestamp="2025-11-22 00:38:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:39:37.265696472 +0000 UTC m=+48.490567754" watchObservedRunningTime="2025-11-22 00:39:37.278236113 +0000 UTC m=+48.503107378"
	Nov 22 00:39:39 embed-certs-540723 kubelet[1473]: I1122 00:39:39.406484    1473 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdblb\" (UniqueName: \"kubernetes.io/projected/17360a56-547d-4ae3-8398-71b0138ab6da-kube-api-access-bdblb\") pod \"busybox\" (UID: \"17360a56-547d-4ae3-8398-71b0138ab6da\") " pod="default/busybox"
	Nov 22 00:39:42 embed-certs-540723 kubelet[1473]: I1122 00:39:42.264423    1473 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.014626334 podStartE2EDuration="3.264406278s" podCreationTimestamp="2025-11-22 00:39:39 +0000 UTC" firstStartedPulling="2025-11-22 00:39:39.713542304 +0000 UTC m=+50.938413569" lastFinishedPulling="2025-11-22 00:39:41.963322248 +0000 UTC m=+53.188193513" observedRunningTime="2025-11-22 00:39:42.263899358 +0000 UTC m=+53.488770623" watchObservedRunningTime="2025-11-22 00:39:42.264406278 +0000 UTC m=+53.489277542"
	
	
	==> storage-provisioner [ba940c4a9dbef36a7420c1fa71906cafe7b528b54683284ac7b75c614dedfda2] <==
	I1122 00:39:36.357387       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:39:36.360189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:36.366862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:39:36.367295       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:39:36.367655       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-540723_306e2b8d-7a0d-4086-a032-71a1737ff414!
	I1122 00:39:36.367917       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7cee4bc9-5e9f-4de6-955b-19e733e02539", APIVersion:"v1", ResourceVersion:"419", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-540723_306e2b8d-7a0d-4086-a032-71a1737ff414 became leader
	W1122 00:39:36.370740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:36.376434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:39:36.468784       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-540723_306e2b8d-7a0d-4086-a032-71a1737ff414!
	W1122 00:39:38.379813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:38.385005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:40.388948       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:40.399057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:42.408002       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:42.416852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:44.420772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:44.425561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:46.429338       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:46.435165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:48.438195       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:48.453200       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:50.458218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:50.472949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:52.477475       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:39:52.486592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-540723 -n embed-certs-540723
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-540723 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (14.64s)
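
A note on the embed-certs post-mortem logs above: the control-plane components themselves look healthy. The kindnet and kube-scheduler "Failed to watch" errors are startup races that clear once RBAC bootstrap and the apiserver ClusterIP come up (both logs show recovery), and the storage-provisioner warnings only flag its continued use of the deprecated v1 Endpoints object as a leader-election lock. That leaves the pod-level nofile limit as the actual failure. The captured sections can be re-queried against a live profile; a sketch, assuming the embed-certs-540723 context and node are still running:

	# regenerate the output captured under "==> describe nodes <=="
	kubectl --context embed-certs-540723 describe node embed-certs-540723
	# probe the apiserver ClusterIP that kindnet briefly timed out against
	# (assumes nc is present in the kicbase node image)
	minikube -p embed-certs-540723 ssh -- nc -zv -w 2 10.96.0.1 443
	# inspect the deprecated Endpoints object the storage-provisioner leader-elects on
	kubectl --context embed-certs-540723 -n kube-system get endpoints k8s.io-minikube-hostpath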

TestStartStop/group/no-preload/serial/DeployApp (16.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-734654 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3e7f3b0f-2b43-4a0d-bf2a-130affdd4fe0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3e7f3b0f-2b43-4a0d-bf2a-130affdd4fe0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.003058126s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-734654 exec busybox -- /bin/sh -c "ulimit -n"
start_stop_delete_test.go:194: 'ulimit -n' returned 1024, expected 1048576
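As in the other DeployApp failures, the assertion compares the pod's soft open-file limit against the test's expected 1048576 (presumably the limit intended for the kicbase node); the pod only sees the runtime default of 1024. One way to check whether the limit is already lost at the node boundary rather than inside the pod, as a sketch against this profile:

	# soft nofile limit inside the kicbase node container
	docker exec no-preload-734654 sh -c "ulimit -n"
	# soft nofile limit inside the test pod (reported 1024 above)
	kubectl --context no-preload-734654 exec busybox -- /bin/sh -c "ulimit -n"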
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-734654
helpers_test.go:243: (dbg) docker inspect no-preload-734654:

-- stdout --
	[
	    {
	        "Id": "b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45",
	        "Created": "2025-11-22T00:41:00.477134155Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229777,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:41:00.578746991Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45/hosts",
	        "LogPath": "/var/lib/docker/containers/b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45/b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45-json.log",
	        "Name": "/no-preload-734654",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-734654:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-734654",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45",
	                "LowerDir": "/var/lib/docker/overlay2/e031e4fb56e6924d708bf512ea3967c95e3578b6219afae593a0888cc6506ce5-init/diff:/var/lib/docker/overlay2/7cce95e9587a813ce5f3ee5f28c6de3b78ed608010774b6d981aecaad739a571/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e031e4fb56e6924d708bf512ea3967c95e3578b6219afae593a0888cc6506ce5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e031e4fb56e6924d708bf512ea3967c95e3578b6219afae593a0888cc6506ce5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e031e4fb56e6924d708bf512ea3967c95e3578b6219afae593a0888cc6506ce5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-734654",
	                "Source": "/var/lib/docker/volumes/no-preload-734654/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-734654",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-734654",
	                "name.minikube.sigs.k8s.io": "no-preload-734654",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c3472e65b21ddadf93ac73507f39b1ff5447afb97867a9e3177480e5290a239",
	            "SandboxKey": "/var/run/docker/netns/5c3472e65b21",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-734654": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:66:c2:ee:21:b2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "321d9f6a63d209046b671cd2254c246e2830ed82acde6626fbf626526ff0f2e7",
	                    "EndpointID": "29bff7b62d2706ee730f5affdc9a972c11c0a504233bc26c3ccbf75e38c8fcf5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-734654",
	                        "b1ccdb27c213"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
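The "Ports" map in the inspect output above is what the test harness uses to reach the node: each container port (22/tcp, 2376/tcp, 8443/tcp, ...) is published on a 127.0.0.1 host port. A single mapping can be read back with the same `docker inspect -f` Go-template pattern that appears later in these logs; a minimal sketch, assuming the no-preload-734654 container is still running (the expected value, 33083, comes from the snapshot above):

	# Print the host port published for the container's SSH port (22/tcp).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-734654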
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-734654 -n no-preload-734654
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-734654 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-734654 logs -n 25: (1.860984019s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-080784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ stop    │ -p default-k8s-diff-port-080784 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-080784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ start   │ -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-540723 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ stop    │ -p embed-certs-540723 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-540723 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ start   │ -p embed-certs-540723 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:41 UTC │
	│ image   │ default-k8s-diff-port-080784 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ pause   │ -p default-k8s-diff-port-080784 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ unpause │ -p default-k8s-diff-port-080784 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ delete  │ -p default-k8s-diff-port-080784                                                                                                                                                                                                                     │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ delete  │ -p default-k8s-diff-port-080784                                                                                                                                                                                                                     │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ delete  │ -p disable-driver-mounts-577767                                                                                                                                                                                                                     │ disable-driver-mounts-577767 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ start   │ -p no-preload-734654 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-734654            │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:42 UTC │
	│ image   │ embed-certs-540723 image list --format=json                                                                                                                                                                                                         │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ pause   │ -p embed-certs-540723 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ unpause │ -p embed-certs-540723 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ delete  │ -p embed-certs-540723                                                                                                                                                                                                                               │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ delete  │ -p embed-certs-540723                                                                                                                                                                                                                               │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ start   │ -p newest-cni-953404 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-953404            │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:42 UTC │
	│ addons  │ enable metrics-server -p newest-cni-953404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-953404            │ jenkins │ v1.37.0 │ 22 Nov 25 00:42 UTC │ 22 Nov 25 00:42 UTC │
	│ stop    │ -p newest-cni-953404 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-953404            │ jenkins │ v1.37.0 │ 22 Nov 25 00:42 UTC │ 22 Nov 25 00:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-953404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-953404            │ jenkins │ v1.37.0 │ 22 Nov 25 00:42 UTC │ 22 Nov 25 00:42 UTC │
	│ start   │ -p newest-cni-953404 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-953404            │ jenkins │ v1.37.0 │ 22 Nov 25 00:42 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:42:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1122 00:42:09.185358  236928 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:42:09.185488  236928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:42:09.185499  236928 out.go:374] Setting ErrFile to fd 2...
	I1122 00:42:09.185505  236928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:42:09.185768  236928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:42:09.186143  236928 out.go:368] Setting JSON to false
	I1122 00:42:09.187070  236928 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5067,"bootTime":1763767063,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1122 00:42:09.187139  236928 start.go:143] virtualization:  
	I1122 00:42:09.190252  236928 out.go:179] * [newest-cni-953404] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:42:09.194157  236928 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:42:09.194332  236928 notify.go:221] Checking for updates...
	I1122 00:42:09.200126  236928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:42:09.203033  236928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:42:09.205978  236928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1122 00:42:09.208817  236928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:42:09.211849  236928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:42:09.215257  236928 config.go:182] Loaded profile config "newest-cni-953404": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:42:09.215973  236928 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:42:09.241253  236928 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:42:09.241357  236928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:42:09.317184  236928 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:42:09.305724865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:42:09.317293  236928 docker.go:319] overlay module found
	I1122 00:42:09.320532  236928 out.go:179] * Using the docker driver based on existing profile
	I1122 00:42:09.323510  236928 start.go:309] selected driver: docker
	I1122 00:42:09.323545  236928 start.go:930] validating driver "docker" against &{Name:newest-cni-953404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-953404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:42:09.323779  236928 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:42:09.324507  236928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:42:09.383356  236928 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:42:09.373480492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:42:09.383784  236928 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:42:09.383822  236928 cni.go:84] Creating CNI manager for ""
	I1122 00:42:09.383881  236928 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:42:09.383927  236928 start.go:353] cluster config:
	{Name:newest-cni-953404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-953404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:42:09.387050  236928 out.go:179] * Starting "newest-cni-953404" primary control-plane node in "newest-cni-953404" cluster
	I1122 00:42:09.389764  236928 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:42:09.392732  236928 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:42:09.395723  236928 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:42:09.395770  236928 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1122 00:42:09.395781  236928 cache.go:65] Caching tarball of preloaded images
	I1122 00:42:09.395816  236928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:42:09.395883  236928 preload.go:238] Found /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1122 00:42:09.395893  236928 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:42:09.396008  236928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/config.json ...
	I1122 00:42:09.417021  236928 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:42:09.417046  236928 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:42:09.417061  236928 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:42:09.417084  236928 start.go:360] acquireMachinesLock for newest-cni-953404: {Name:mk9f77ab0cb88bc744c03e61f3cd82397d16e4c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:42:09.417146  236928 start.go:364] duration metric: took 37.867µs to acquireMachinesLock for "newest-cni-953404"
	I1122 00:42:09.417170  236928 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:42:09.417179  236928 fix.go:54] fixHost starting: 
	I1122 00:42:09.417442  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:09.436454  236928 fix.go:112] recreateIfNeeded on newest-cni-953404: state=Stopped err=<nil>
	W1122 00:42:09.436484  236928 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:42:09.756241  229475 node_ready.go:49] node "no-preload-734654" is "Ready"
	I1122 00:42:09.756268  229475 node_ready.go:38] duration metric: took 13.004360993s for node "no-preload-734654" to be "Ready" ...
	I1122 00:42:09.756282  229475 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:42:09.756337  229475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:42:09.788658  229475 api_server.go:72] duration metric: took 15.555756899s to wait for apiserver process to appear ...
	I1122 00:42:09.788682  229475 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:42:09.788712  229475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:42:09.799772  229475 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:42:09.801452  229475 api_server.go:141] control plane version: v1.34.1
	I1122 00:42:09.801480  229475 api_server.go:131] duration metric: took 12.791334ms to wait for apiserver health ...
	I1122 00:42:09.801490  229475 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:42:09.807973  229475 system_pods.go:59] 8 kube-system pods found
	I1122 00:42:09.808005  229475 system_pods.go:61] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:42:09.808011  229475 system_pods.go:61] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:09.808017  229475 system_pods.go:61] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:09.808021  229475 system_pods.go:61] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:09.808025  229475 system_pods.go:61] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:09.808029  229475 system_pods.go:61] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:09.808033  229475 system_pods.go:61] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:09.808038  229475 system_pods.go:61] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:42:09.808045  229475 system_pods.go:74] duration metric: took 6.548403ms to wait for pod list to return data ...
	I1122 00:42:09.808053  229475 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:42:09.814305  229475 default_sa.go:45] found service account: "default"
	I1122 00:42:09.814336  229475 default_sa.go:55] duration metric: took 6.27672ms for default service account to be created ...
	I1122 00:42:09.814469  229475 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:42:09.819778  229475 system_pods.go:86] 8 kube-system pods found
	I1122 00:42:09.819813  229475 system_pods.go:89] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:42:09.819820  229475 system_pods.go:89] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:09.819826  229475 system_pods.go:89] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:09.819830  229475 system_pods.go:89] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:09.819835  229475 system_pods.go:89] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:09.819839  229475 system_pods.go:89] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:09.819843  229475 system_pods.go:89] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:09.819849  229475 system_pods.go:89] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:42:09.819879  229475 retry.go:31] will retry after 276.611466ms: missing components: kube-dns
	I1122 00:42:10.104870  229475 system_pods.go:86] 8 kube-system pods found
	I1122 00:42:10.104905  229475 system_pods.go:89] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:42:10.104914  229475 system_pods.go:89] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:10.104920  229475 system_pods.go:89] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:10.104925  229475 system_pods.go:89] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:10.104931  229475 system_pods.go:89] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:10.104940  229475 system_pods.go:89] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:10.104945  229475 system_pods.go:89] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:10.104957  229475 system_pods.go:89] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:42:10.104975  229475 retry.go:31] will retry after 237.799926ms: missing components: kube-dns
	I1122 00:42:10.351220  229475 system_pods.go:86] 8 kube-system pods found
	I1122 00:42:10.351285  229475 system_pods.go:89] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:42:10.351293  229475 system_pods.go:89] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:10.351299  229475 system_pods.go:89] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:10.351303  229475 system_pods.go:89] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:10.351307  229475 system_pods.go:89] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:10.351310  229475 system_pods.go:89] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:10.351314  229475 system_pods.go:89] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:10.351319  229475 system_pods.go:89] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:42:10.351333  229475 retry.go:31] will retry after 343.711479ms: missing components: kube-dns
	I1122 00:42:10.701981  229475 system_pods.go:86] 8 kube-system pods found
	I1122 00:42:10.702013  229475 system_pods.go:89] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:42:10.702020  229475 system_pods.go:89] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:10.702026  229475 system_pods.go:89] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:10.702031  229475 system_pods.go:89] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:10.702036  229475 system_pods.go:89] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:10.702040  229475 system_pods.go:89] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:10.702044  229475 system_pods.go:89] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:10.702050  229475 system_pods.go:89] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:42:10.702065  229475 retry.go:31] will retry after 527.354094ms: missing components: kube-dns
	I1122 00:42:11.233280  229475 system_pods.go:86] 8 kube-system pods found
	I1122 00:42:11.233316  229475 system_pods.go:89] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Running
	I1122 00:42:11.233324  229475 system_pods.go:89] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:11.233329  229475 system_pods.go:89] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:11.233334  229475 system_pods.go:89] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:11.233339  229475 system_pods.go:89] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:11.233343  229475 system_pods.go:89] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:11.233397  229475 system_pods.go:89] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:11.233413  229475 system_pods.go:89] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Running
	I1122 00:42:11.233421  229475 system_pods.go:126] duration metric: took 1.418939899s to wait for k8s-apps to be running ...
	I1122 00:42:11.233429  229475 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:42:11.233515  229475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:42:11.247292  229475 system_svc.go:56] duration metric: took 13.852935ms WaitForService to wait for kubelet
	I1122 00:42:11.247320  229475 kubeadm.go:587] duration metric: took 17.014423776s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:42:11.247339  229475 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:42:11.250285  229475 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:42:11.250319  229475 node_conditions.go:123] node cpu capacity is 2
	I1122 00:42:11.250334  229475 node_conditions.go:105] duration metric: took 2.989512ms to run NodePressure ...
	I1122 00:42:11.250347  229475 start.go:242] waiting for startup goroutines ...
	I1122 00:42:11.250355  229475 start.go:247] waiting for cluster config update ...
	I1122 00:42:11.250366  229475 start.go:256] writing updated cluster config ...
	I1122 00:42:11.250692  229475 ssh_runner.go:195] Run: rm -f paused
	I1122 00:42:11.254727  229475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:42:11.258170  229475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7ddjv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.262696  229475 pod_ready.go:94] pod "coredns-66bc5c9577-7ddjv" is "Ready"
	I1122 00:42:11.262726  229475 pod_ready.go:86] duration metric: took 4.529135ms for pod "coredns-66bc5c9577-7ddjv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.265246  229475 pod_ready.go:83] waiting for pod "etcd-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.269680  229475 pod_ready.go:94] pod "etcd-no-preload-734654" is "Ready"
	I1122 00:42:11.269706  229475 pod_ready.go:86] duration metric: took 4.434152ms for pod "etcd-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.272011  229475 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.276732  229475 pod_ready.go:94] pod "kube-apiserver-no-preload-734654" is "Ready"
	I1122 00:42:11.276757  229475 pod_ready.go:86] duration metric: took 4.719194ms for pod "kube-apiserver-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.279214  229475 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.659228  229475 pod_ready.go:94] pod "kube-controller-manager-no-preload-734654" is "Ready"
	I1122 00:42:11.659266  229475 pod_ready.go:86] duration metric: took 380.026823ms for pod "kube-controller-manager-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.860500  229475 pod_ready.go:83] waiting for pod "kube-proxy-m2v57" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:12.258453  229475 pod_ready.go:94] pod "kube-proxy-m2v57" is "Ready"
	I1122 00:42:12.258484  229475 pod_ready.go:86] duration metric: took 397.956677ms for pod "kube-proxy-m2v57" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:12.458763  229475 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:12.859371  229475 pod_ready.go:94] pod "kube-scheduler-no-preload-734654" is "Ready"
	I1122 00:42:12.859447  229475 pod_ready.go:86] duration metric: took 400.656455ms for pod "kube-scheduler-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:12.859489  229475 pod_ready.go:40] duration metric: took 1.604727938s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:42:12.928731  229475 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:42:12.932110  229475 out.go:179] * Done! kubectl is now configured to use "no-preload-734654" cluster and "default" namespace by default
	I1122 00:42:09.439690  236928 out.go:252] * Restarting existing docker container for "newest-cni-953404" ...
	I1122 00:42:09.439781  236928 cli_runner.go:164] Run: docker start newest-cni-953404
	I1122 00:42:09.764406  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:09.799309  236928 kic.go:430] container "newest-cni-953404" state is running.
	I1122 00:42:09.799848  236928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-953404
	I1122 00:42:09.832642  236928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/config.json ...
	I1122 00:42:09.832888  236928 machine.go:94] provisionDockerMachine start ...
	I1122 00:42:09.833152  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:09.857788  236928 main.go:143] libmachine: Using SSH client type: native
	I1122 00:42:09.858231  236928 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1122 00:42:09.858246  236928 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:42:09.858796  236928 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42398->127.0.0.1:33093: read: connection reset by peer
	I1122 00:42:13.034964  236928 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-953404
	
	I1122 00:42:13.034986  236928 ubuntu.go:182] provisioning hostname "newest-cni-953404"
	I1122 00:42:13.035043  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:13.082568  236928 main.go:143] libmachine: Using SSH client type: native
	I1122 00:42:13.082914  236928 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1122 00:42:13.082926  236928 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-953404 && echo "newest-cni-953404" | sudo tee /etc/hostname
	I1122 00:42:13.260937  236928 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-953404
	
	I1122 00:42:13.261021  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:13.280159  236928 main.go:143] libmachine: Using SSH client type: native
	I1122 00:42:13.280472  236928 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1122 00:42:13.280498  236928 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-953404' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-953404/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-953404' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:42:13.427854  236928 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1122 00:42:13.427889  236928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-2332/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-2332/.minikube}
	I1122 00:42:13.427916  236928 ubuntu.go:190] setting up certificates
	I1122 00:42:13.427927  236928 provision.go:84] configureAuth start
	I1122 00:42:13.427996  236928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-953404
	I1122 00:42:13.447278  236928 provision.go:143] copyHostCerts
	I1122 00:42:13.447360  236928 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem, removing ...
	I1122 00:42:13.447382  236928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem
	I1122 00:42:13.447461  236928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem (1078 bytes)
	I1122 00:42:13.447757  236928 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem, removing ...
	I1122 00:42:13.447773  236928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem
	I1122 00:42:13.447830  236928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem (1123 bytes)
	I1122 00:42:13.447950  236928 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem, removing ...
	I1122 00:42:13.447962  236928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem
	I1122 00:42:13.447992  236928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem (1675 bytes)
	I1122 00:42:13.448056  236928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem org=jenkins.newest-cni-953404 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-953404]
	I1122 00:42:13.840746  236928 provision.go:177] copyRemoteCerts
	I1122 00:42:13.840815  236928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:42:13.840860  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:13.861889  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:13.963235  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:42:13.983157  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:42:14.002290  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:42:14.024190  236928 provision.go:87] duration metric: took 596.238809ms to configureAuth
	I1122 00:42:14.024263  236928 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:42:14.024517  236928 config.go:182] Loaded profile config "newest-cni-953404": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:42:14.024533  236928 machine.go:97] duration metric: took 4.191627738s to provisionDockerMachine
	I1122 00:42:14.024542  236928 start.go:293] postStartSetup for "newest-cni-953404" (driver="docker")
	I1122 00:42:14.024552  236928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:42:14.024614  236928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:42:14.024669  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:14.043165  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:14.143482  236928 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:42:14.146978  236928 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:42:14.147007  236928 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:42:14.147019  236928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/addons for local assets ...
	I1122 00:42:14.147071  236928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/files for local assets ...
	I1122 00:42:14.147150  236928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem -> 56232.pem in /etc/ssl/certs
	I1122 00:42:14.147263  236928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:42:14.154798  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:42:14.173955  236928 start.go:296] duration metric: took 149.397354ms for postStartSetup
	I1122 00:42:14.174049  236928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:42:14.174087  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:14.191675  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:14.289323  236928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:42:14.294245  236928 fix.go:56] duration metric: took 4.877059458s for fixHost
	I1122 00:42:14.294273  236928 start.go:83] releasing machines lock for "newest-cni-953404", held for 4.877112825s
	I1122 00:42:14.294348  236928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-953404
	I1122 00:42:14.311794  236928 ssh_runner.go:195] Run: cat /version.json
	I1122 00:42:14.311857  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:14.312127  236928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:42:14.312191  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:14.337390  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:14.347680  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:14.542295  236928 ssh_runner.go:195] Run: systemctl --version
	I1122 00:42:14.549194  236928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:42:14.554185  236928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:42:14.554315  236928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:42:14.563333  236928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:42:14.563413  236928 start.go:496] detecting cgroup driver to use...
	I1122 00:42:14.563476  236928 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:42:14.563604  236928 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:42:14.581922  236928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:42:14.597805  236928 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:42:14.597905  236928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:42:14.613882  236928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:42:14.627697  236928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:42:14.754622  236928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:42:14.880516  236928 docker.go:234] disabling docker service ...
	I1122 00:42:14.880663  236928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:42:14.896367  236928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:42:14.911158  236928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:42:15.056656  236928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:42:15.213418  236928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
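The block above stops, disables, and masks the Docker-side units so containerd is left as the only CRI endpoint. A condensed sketch of the same sequence:

    # Stop cri-dockerd and dockerd, then mask them so nothing restarts them.
    sudo systemctl stop -f cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    systemctl is-active --quiet docker || echo "docker is stopped"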
	I1122 00:42:15.227844  236928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:42:15.243131  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:42:15.252426  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:42:15.261942  236928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1122 00:42:15.262010  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1122 00:42:15.271498  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:42:15.281747  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:42:15.291527  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:42:15.302088  236928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:42:15.311530  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:42:15.320697  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:42:15.329960  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
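The sed edits above rewrite /etc/containerd/config.toml in place: pin the pause image, force SystemdCgroup = false to match the detected cgroupfs driver, and re-insert enable_unprivileged_ports = true under the CRI plugin section. The quoted forms of the key commands, as the logged shell would have run them:

    sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml
    sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml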
	I1122 00:42:15.339450  236928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:42:15.347734  236928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:42:15.355970  236928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:42:15.480115  236928 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1122 00:42:15.644354  236928 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:42:15.644426  236928 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
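After the config rewrite, minikube reloads systemd, restarts containerd, and waits up to 60s for the CRI socket before probing crictl. A sketch of that wait as a poll loop (the explicit --runtime-endpoint flag is an assumption; crictl also reads /etc/crictl.yaml, written above):

    sudo systemctl daemon-reload
    sudo systemctl restart containerd
    for i in $(seq 1 60); do               # mirror the 60s socket wait above
      [ -S /run/containerd/containerd.sock ] && break
      sleep 1
    done
    sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version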
	I1122 00:42:15.648867  236928 start.go:564] Will wait 60s for crictl version
	I1122 00:42:15.648932  236928 ssh_runner.go:195] Run: which crictl
	I1122 00:42:15.652674  236928 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:42:15.684129  236928 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:42:15.684214  236928 ssh_runner.go:195] Run: containerd --version
	I1122 00:42:15.704501  236928 ssh_runner.go:195] Run: containerd --version
	I1122 00:42:15.733779  236928 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1122 00:42:15.736760  236928 cli_runner.go:164] Run: docker network inspect newest-cni-953404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:42:15.753983  236928 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:42:15.758093  236928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
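The /etc/hosts update above is idempotent: strip any stale line ending in the name, append the current mapping, and copy the temp file back into place. The same pattern as a reusable function (the helper name is hypothetical):

    update_hosts() {   # usage: update_hosts <ip> <name>
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
    }
    update_hosts 192.168.76.1 host.minikube.internal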
	I1122 00:42:15.772421  236928 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1122 00:42:15.775471  236928 kubeadm.go:884] updating cluster {Name:newest-cni-953404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-953404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:42:15.775747  236928 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:42:15.775839  236928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:42:15.810469  236928 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:42:15.810496  236928 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:42:15.810555  236928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:42:15.837530  236928 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:42:15.837615  236928 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:42:15.837631  236928 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1122 00:42:15.837747  236928 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-953404 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-953404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1122 00:42:15.837817  236928 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:42:15.865968  236928 cni.go:84] Creating CNI manager for ""
	I1122 00:42:15.865995  236928 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:42:15.866019  236928 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1122 00:42:15.866041  236928 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-953404 NodeName:newest-cni-953404 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:42:15.866181  236928 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-953404"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1122 00:42:15.866254  236928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:42:15.877540  236928 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:42:15.877610  236928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:42:15.887060  236928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1122 00:42:15.902271  236928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:42:15.916946  236928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
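The rendered kubeadm config shown above is shipped to the node as /var/tmp/minikube/kubeadm.yaml.new. One way to sanity-check such a file before it reaches a real init, not a step this run performs, is a kubeadm dry-run:

    # Illustrative validation only: parse and plan against the generated config.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml.new --dry-run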
	I1122 00:42:15.931863  236928 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:42:15.936336  236928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:42:15.948430  236928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:42:16.128382  236928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:42:16.166236  236928 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404 for IP: 192.168.76.2
	I1122 00:42:16.166259  236928 certs.go:195] generating shared ca certs ...
	I1122 00:42:16.166275  236928 certs.go:227] acquiring lock for ca certs: {Name:mk348a892ec4309987f6c81ee1acef4884ca62db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:42:16.166510  236928 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key
	I1122 00:42:16.166588  236928 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key
	I1122 00:42:16.166602  236928 certs.go:257] generating profile certs ...
	I1122 00:42:16.166727  236928 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/client.key
	I1122 00:42:16.166847  236928 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/apiserver.key.146c0f14
	I1122 00:42:16.166936  236928 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/proxy-client.key
	I1122 00:42:16.167094  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem (1338 bytes)
	W1122 00:42:16.167142  236928 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623_empty.pem, impossibly tiny 0 bytes
	I1122 00:42:16.167171  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:42:16.167226  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:42:16.167297  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:42:16.167361  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem (1675 bytes)
	I1122 00:42:16.167439  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:42:16.168325  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:42:16.198086  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:42:16.217572  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:42:16.269129  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:42:16.297544  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:42:16.320873  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:42:16.342539  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:42:16.369321  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:42:16.396977  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /usr/share/ca-certificates/56232.pem (1708 bytes)
	I1122 00:42:16.422938  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:42:16.444609  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem --> /usr/share/ca-certificates/5623.pem (1338 bytes)
	I1122 00:42:16.466176  236928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:42:16.497348  236928 ssh_runner.go:195] Run: openssl version
	I1122 00:42:16.504746  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56232.pem && ln -fs /usr/share/ca-certificates/56232.pem /etc/ssl/certs/56232.pem"
	I1122 00:42:16.515693  236928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56232.pem
	I1122 00:42:16.519552  236928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/56232.pem
	I1122 00:42:16.519694  236928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56232.pem
	I1122 00:42:16.564054  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56232.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:42:16.572704  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:42:16.581175  236928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:42:16.585430  236928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:42:16.585521  236928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:42:16.628666  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:42:16.636768  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5623.pem && ln -fs /usr/share/ca-certificates/5623.pem /etc/ssl/certs/5623.pem"
	I1122 00:42:16.646049  236928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5623.pem
	I1122 00:42:16.650199  236928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/5623.pem
	I1122 00:42:16.650339  236928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5623.pem
	I1122 00:42:16.692407  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5623.pem /etc/ssl/certs/51391683.0"
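The openssl/ln pairs above implement OpenSSL's hash-directory lookup: each CA is symlinked into /etc/ssl/certs under its subject-hash name with a .0 suffix (b5213941.0 is minikubeCA in this run). The same steps as a helper (the function name is hypothetical):

    install_ca() {   # usage: install_ca /usr/share/ca-certificates/foo.pem
      local hash
      hash=$(openssl x509 -hash -noout -in "$1")
      sudo ln -fs "$1" "/etc/ssl/certs/${hash}.0"
    }
    install_ca /usr/share/ca-certificates/minikubeCA.pem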
	I1122 00:42:16.700387  236928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:42:16.704248  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:42:16.745460  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:42:16.788624  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:42:16.838463  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:42:16.882311  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:42:16.958637  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
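The -checkend 86400 calls above make openssl exit non-zero if a certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether control-plane certs need regeneration. A sketch over a few of the same files:

    # Fail fast if any of these certs expires within 24h.
    for crt in apiserver-kubelet-client.crt etcd/server.crt front-proxy-client.crt; do
      sudo openssl x509 -noout -in "/var/lib/minikube/certs/$crt" -checkend 86400 \
        || echo "WARN: $crt expires within 24h"
    done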
	I1122 00:42:17.018402  236928 kubeadm.go:401] StartCluster: {Name:newest-cni-953404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-953404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:42:17.018555  236928 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:42:17.018662  236928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:42:17.079681  236928 cri.go:89] found id: "040561f5e693598ddda2a1207b1e7450e001274e60a6a6155b1bddfe4f764632"
	I1122 00:42:17.079757  236928 cri.go:89] found id: "c0576803f36ce2eb1cbdaaa03dc3304cfa7c0e14964ab51d3157094f62e7cef6"
	I1122 00:42:17.079776  236928 cri.go:89] found id: "9569b37164273b550b32e4a4842a6b3487c8dbbfe1bea214d492edde0ea68a04"
	I1122 00:42:17.079792  236928 cri.go:89] found id: "68964f2029378c8753880049e8a138f6d732ad285c7ae266ed075c1534a25aff"
	I1122 00:42:17.079823  236928 cri.go:89] found id: "fddb58d80a874884e3b278956f97291cc577695d419a243e23fb51e1f93cc7f1"
	I1122 00:42:17.079844  236928 cri.go:89] found id: "320d30b6e6043992eff65040cbb828d8621c9e556fe74b8deb765d4fcd67b371"
	I1122 00:42:17.079863  236928 cri.go:89] found id: ""
	I1122 00:42:17.079955  236928 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1122 00:42:17.116873  236928 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f","pid":873,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f/rootfs","created":"2025-11-22T00:42:16.952414551Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-953404_118f80961ff0b26c227409b9cb092e20","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-953404","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"118f80961ff0b26c227409b9cb092e20"},"owner":"root"},{"ociVersion":"1.2.1","id":"af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968","pid":925,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968/rootfs","created":"2025-11-22T00:42:17.016293515Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-953404_74bca79c7ffdc531f69f5a5a221a97bd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-953404","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"74bca79c7ffdc531f69f5a5a221a97bd"},"owner":"root"},{"ociVersion":"1.2.1","id":"b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473","pid":952,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-953404_200368f9345b84e4f3c70e4a4d3c9c77","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-953404","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"200368f9345b84e4f3c70e4a4d3c9c77"},"owner":"root"},{"ociVersion":"1.2.1","id":"bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6","pid":954,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6/rootfs","created":"2025-11-22T00:42:17.072127226Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-953404_af7b377ee1ea510aa305430d7d26bd6c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-953404","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"af7b377ee1ea510aa305430d7d26bd6c"},"owner":"root"}]
	I1122 00:42:17.117089  236928 cri.go:126] list returned 4 containers
	I1122 00:42:17.117119  236928 cri.go:129] container: {ID:312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f Status:running}
	I1122 00:42:17.117151  236928 cri.go:131] skipping 312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f - not in ps
	I1122 00:42:17.117172  236928 cri.go:129] container: {ID:af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968 Status:running}
	I1122 00:42:17.117206  236928 cri.go:131] skipping af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968 - not in ps
	I1122 00:42:17.117232  236928 cri.go:129] container: {ID:b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473 Status:created}
	I1122 00:42:17.117254  236928 cri.go:131] skipping b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473 - not in ps
	I1122 00:42:17.117272  236928 cri.go:129] container: {ID:bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6 Status:created}
	I1122 00:42:17.117291  236928 cri.go:131] skipping bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6 - not in ps
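The runc listing above is parsed to decide which containers may be manipulated; IDs that crictl did not report (the four pause sandboxes here) are skipped. The same listing from a shell, using jq as a stand-in for the Go-side JSON parsing:

    # List task IDs and states under containerd's k8s.io runc root.
    sudo runc --root /run/containerd/runc/k8s.io list -f json \
      | jq -r '.[] | "\(.id) \(.status)"'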
	I1122 00:42:17.117368  236928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:42:17.131858  236928 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:42:17.131920  236928 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:42:17.132003  236928 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:42:17.148556  236928 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:42:17.149158  236928 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-953404" does not appear in /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:42:17.149432  236928 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-2332/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-953404" cluster setting kubeconfig missing "newest-cni-953404" context setting]
	I1122 00:42:17.149859  236928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:42:17.152680  236928 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:42:17.168995  236928 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1122 00:42:17.169031  236928 kubeadm.go:602] duration metric: took 37.090512ms to restartPrimaryControlPlane
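The restart path diffs the kubeadm.yaml already on disk against the freshly generated .new file; an empty diff (exit status 0) is what produces the "does not require reconfiguration" message above. The decision as a sketch:

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
      echo "config unchanged: skip control-plane reconfiguration"
    else
      echo "config drifted: rerun kubeadm phases"
    fi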
	I1122 00:42:17.169041  236928 kubeadm.go:403] duration metric: took 150.649637ms to StartCluster
	I1122 00:42:17.169056  236928 settings.go:142] acquiring lock: {Name:mk5b79634916fd13f05f4c848ff3e8b07cafa39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:42:17.169123  236928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:42:17.170062  236928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:42:17.170284  236928 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:42:17.170639  236928 config.go:182] Loaded profile config "newest-cni-953404": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:42:17.170687  236928 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:42:17.170753  236928 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-953404"
	I1122 00:42:17.170769  236928 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-953404"
	W1122 00:42:17.170780  236928 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:42:17.170800  236928 host.go:66] Checking if "newest-cni-953404" exists ...
	I1122 00:42:17.171268  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:17.171897  236928 addons.go:70] Setting dashboard=true in profile "newest-cni-953404"
	I1122 00:42:17.171915  236928 addons.go:239] Setting addon dashboard=true in "newest-cni-953404"
	W1122 00:42:17.171922  236928 addons.go:248] addon dashboard should already be in state true
	I1122 00:42:17.171945  236928 host.go:66] Checking if "newest-cni-953404" exists ...
	I1122 00:42:17.172378  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:17.172647  236928 addons.go:70] Setting metrics-server=true in profile "newest-cni-953404"
	I1122 00:42:17.172686  236928 addons.go:239] Setting addon metrics-server=true in "newest-cni-953404"
	W1122 00:42:17.172693  236928 addons.go:248] addon metrics-server should already be in state true
	I1122 00:42:17.172719  236928 host.go:66] Checking if "newest-cni-953404" exists ...
	I1122 00:42:17.173134  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:17.176149  236928 addons.go:70] Setting default-storageclass=true in profile "newest-cni-953404"
	I1122 00:42:17.176173  236928 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-953404"
	I1122 00:42:17.176478  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:17.177179  236928 out.go:179] * Verifying Kubernetes components...
	I1122 00:42:17.193351  236928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:42:17.255358  236928 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:42:17.255443  236928 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1122 00:42:17.255460  236928 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:42:17.260887  236928 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:42:17.260953  236928 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1122 00:42:17.260964  236928 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1122 00:42:17.261033  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:17.261199  236928 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:42:17.261206  236928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:42:17.261240  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:17.263885  236928 addons.go:239] Setting addon default-storageclass=true in "newest-cni-953404"
	W1122 00:42:17.263907  236928 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:42:17.263931  236928 host.go:66] Checking if "newest-cni-953404" exists ...
	I1122 00:42:17.264341  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:17.264654  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:42:17.264674  236928 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:42:17.264723  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:17.329394  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:17.336393  236928 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:42:17.336414  236928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:42:17.336470  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:17.339971  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:17.343841  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:17.371991  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:17.507688  236928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:42:17.659628  236928 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:42:17.659754  236928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:42:17.694149  236928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:42:17.743521  236928 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1122 00:42:17.743603  236928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1122 00:42:17.928202  236928 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1122 00:42:17.928284  236928 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1122 00:42:17.930343  236928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:42:17.939166  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:42:17.939240  236928 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:42:18.001646  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:42:18.001723  236928 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:42:18.034642  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:42:18.034716  236928 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1122 00:42:18.049175  236928 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1122 00:42:18.049265  236928 retry.go:31] will retry after 240.226221ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
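The first apply above fails because nothing is listening on localhost:8443 yet, so minikube retries after a short backoff (240ms here) and later falls back to apply --force. A generic retry sketch; the attempt count and backoff step are illustrative, not minikube's values:

    for i in 1 2 3 4 5; do
      sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply \
        -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep "$i"   # linear backoff between attempts
    done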
	I1122 00:42:18.075196  236928 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:42:18.075221  236928 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1122 00:42:18.128742  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:42:18.128761  236928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:42:18.159854  236928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:42:18.173773  236928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:42:18.236745  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:42:18.236818  236928 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:42:18.290481  236928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:42:18.330438  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:42:18.330510  236928 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:42:18.423392  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:42:18.423465  236928 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:42:18.531040  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:42:18.531115  236928 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:42:18.700673  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:42:18.700747  236928 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:42:18.737381  236928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
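All ten dashboard manifests go to the apiserver in one kubectl apply with repeated -f flags. On the node itself, a glob can build the same argument list (illustrative; relies on intentional word splitting of the printf output):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.34.1/kubectl apply \
      $(printf -- '-f %s ' /etc/kubernetes/addons/dashboard-*.yaml)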
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	332b006afd8fc       1611cd07b61d5       8 seconds ago       Running             busybox                   0                   280ef6671af19       busybox                                     default
	03be686d9bf27       66749159455b3       14 seconds ago      Running             storage-provisioner       0                   588e35f6934bf       storage-provisioner                         kube-system
	a2377a9d9ca92       138784d87c9c5       14 seconds ago      Running             coredns                   0                   0f08c4ebd039b       coredns-66bc5c9577-7ddjv                    kube-system
	3311d8b509d77       b1a8c6f707935       26 seconds ago      Running             kindnet-cni               0                   a6265b3e05cd8       kindnet-72xnf                               kube-system
	bb89b752fb8c9       05baa95f5142d       29 seconds ago      Running             kube-proxy                0                   5ec85718e24b2       kube-proxy-m2v57                            kube-system
	2ab595e5f196d       43911e833d64d       48 seconds ago      Running             kube-apiserver            0                   153ce7a50b1b3       kube-apiserver-no-preload-734654            kube-system
	e638f47ef1c96       a1894772a478e       48 seconds ago      Running             etcd                      0                   681920238f189       etcd-no-preload-734654                      kube-system
	90ef3180c03a0       7eb2c6ff0c5a7       48 seconds ago      Running             kube-controller-manager   0                   6fff6b589eb64       kube-controller-manager-no-preload-734654   kube-system
	eaaf10be34890       b5f57ec6b9867       48 seconds ago      Running             kube-scheduler            0                   0cb5fc9e7bb0b       kube-scheduler-no-preload-734654            kube-system
	
	
	==> containerd <==
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.389432802Z" level=info msg="connecting to shim 588e35f6934bf424ea37b5aa19e1dd25628cee1f02396ad27d96a3437d9434b1" address="unix:///run/containerd/s/37ec126b77ad1a04f8a1db3ca8ed97bd0dd01f0a0f9bdc028fe532dab4768b85" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.515793994Z" level=info msg="StartContainer for \"a2377a9d9ca9276f76a49240cc0701616e2930409d19ee65fbd80752f8f71ffa\" returns successfully"
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.621205942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:62e9fefe-8213-4869-badc-8ff66248f8fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"588e35f6934bf424ea37b5aa19e1dd25628cee1f02396ad27d96a3437d9434b1\""
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.631967242Z" level=info msg="CreateContainer within sandbox \"588e35f6934bf424ea37b5aa19e1dd25628cee1f02396ad27d96a3437d9434b1\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.646107246Z" level=info msg="Container 03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.661615949Z" level=info msg="CreateContainer within sandbox \"588e35f6934bf424ea37b5aa19e1dd25628cee1f02396ad27d96a3437d9434b1\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b\""
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.664082033Z" level=info msg="StartContainer for \"03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b\""
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.665739360Z" level=info msg="connecting to shim 03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b" address="unix:///run/containerd/s/37ec126b77ad1a04f8a1db3ca8ed97bd0dd01f0a0f9bdc028fe532dab4768b85" protocol=ttrpc version=3
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.757768295Z" level=info msg="StartContainer for \"03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b\" returns successfully"
	Nov 22 00:42:13 no-preload-734654 containerd[758]: time="2025-11-22T00:42:13.489007173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3e7f3b0f-2b43-4a0d-bf2a-130affdd4fe0,Namespace:default,Attempt:0,}"
	Nov 22 00:42:13 no-preload-734654 containerd[758]: time="2025-11-22T00:42:13.596202037Z" level=info msg="connecting to shim 280ef6671af19befe08c3abbc381d1a0117ffcaec8d092cbad21720ae554d4ea" address="unix:///run/containerd/s/8a6c35978fbe912d4c674dfc7fcc2ecf2221e1fa92a4e5102640e1590b092ed9" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:42:13 no-preload-734654 containerd[758]: time="2025-11-22T00:42:13.749477326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3e7f3b0f-2b43-4a0d-bf2a-130affdd4fe0,Namespace:default,Attempt:0,} returns sandbox id \"280ef6671af19befe08c3abbc381d1a0117ffcaec8d092cbad21720ae554d4ea\""
	Nov 22 00:42:13 no-preload-734654 containerd[758]: time="2025-11-22T00:42:13.755282843Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.076956708Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.080401028Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.082512630Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.086607954Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.088106124Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.332615188s"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.088258167Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.094739320Z" level=info msg="CreateContainer within sandbox \"280ef6671af19befe08c3abbc381d1a0117ffcaec8d092cbad21720ae554d4ea\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.110971725Z" level=info msg="Container 332b006afd8fcd765c000454a646b2d4213dd34f4d7b1dd0b69c32c11b9f535b: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.121577765Z" level=info msg="CreateContainer within sandbox \"280ef6671af19befe08c3abbc381d1a0117ffcaec8d092cbad21720ae554d4ea\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"332b006afd8fcd765c000454a646b2d4213dd34f4d7b1dd0b69c32c11b9f535b\""
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.125222565Z" level=info msg="StartContainer for \"332b006afd8fcd765c000454a646b2d4213dd34f4d7b1dd0b69c32c11b9f535b\""
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.128383173Z" level=info msg="connecting to shim 332b006afd8fcd765c000454a646b2d4213dd34f4d7b1dd0b69c32c11b9f535b" address="unix:///run/containerd/s/8a6c35978fbe912d4c674dfc7fcc2ecf2221e1fa92a4e5102640e1590b092ed9" protocol=ttrpc version=3
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.284701250Z" level=info msg="StartContainer for \"332b006afd8fcd765c000454a646b2d4213dd34f4d7b1dd0b69c32c11b9f535b\" returns successfully"
	
	
	==> coredns [a2377a9d9ca9276f76a49240cc0701616e2930409d19ee65fbd80752f8f71ffa] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38189 - 9040 "HINFO IN 8651196616265383532.1837598247671117162. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021609404s
	
	
	==> describe nodes <==
	Name:               no-preload-734654
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-734654
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=no-preload-734654
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_41_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:41:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-734654
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:42:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:42:21 +0000   Sat, 22 Nov 2025 00:41:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:42:21 +0000   Sat, 22 Nov 2025 00:41:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:42:21 +0000   Sat, 22 Nov 2025 00:41:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:42:21 +0000   Sat, 22 Nov 2025 00:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-734654
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                2d98f259-ac3c-4a59-b21a-68b0575348bc
	  Boot ID:                    4e86741a-5896-4eb6-97ce-70ea8beedc67
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-7ddjv                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     30s
	  kube-system                 etcd-no-preload-734654                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         35s
	  kube-system                 kindnet-72xnf                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      31s
	  kube-system                 kube-apiserver-no-preload-734654             250m (12%)    0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-no-preload-734654    200m (10%)    0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-proxy-m2v57                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-scheduler-no-preload-734654             100m (5%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 28s                kube-proxy       
	  Normal   NodeAllocatableEnforced  50s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 50s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  50s (x8 over 50s)  kubelet          Node no-preload-734654 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    50s (x8 over 50s)  kubelet          Node no-preload-734654 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     50s (x7 over 50s)  kubelet          Node no-preload-734654 status is now: NodeHasSufficientPID
	  Normal   Starting                 50s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 35s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 35s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  35s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  35s                kubelet          Node no-preload-734654 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    35s                kubelet          Node no-preload-734654 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     35s                kubelet          Node no-preload-734654 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           32s                node-controller  Node no-preload-734654 event: Registered Node no-preload-734654 in Controller
	  Normal   NodeReady                16s                kubelet          Node no-preload-734654 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.017121] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498034] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037542] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808656] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.648915] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 23:58] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=00000000f9ea0775{9P.session} n=0000000035823f74
	[  +0.001177] FS-Cache: O-key=[10] '34323935353131333738'
	[  +0.000819] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000f9ea0775{9P.session} n=00000000dbfd8515
	[  +0.001154] FS-Cache: N-key=[10] '34323935353131333738'
	[Nov22 00:00] hrtimer: interrupt took 9958927 ns
	
	
	==> etcd [e638f47ef1c96a374c96f40ec7088eb9c066047ef70a224e9f97e7ff919069b4] <==
	{"level":"warn","ts":"2025-11-22T00:41:41.964388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.023284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.076292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.112204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.148741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.207988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.239938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.280106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.332107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.361124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.427411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.447386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.501262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.538899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.587654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.650386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.685563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.740413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.768685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.819758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.860668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.907995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.952940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.994938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:43.275841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35480","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:42:25 up  1:24,  0 user,  load average: 4.74, 3.75, 3.08
	Linux no-preload-734654 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3311d8b509d77a934414dc4e7df24b222e0d39fbc0b723442a8e5e7c930a8ef4] <==
	I1122 00:41:59.313687       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:41:59.313989       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:41:59.314179       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:41:59.314200       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:41:59.314210       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:41:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:41:59.519946       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:41:59.520202       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:41:59.520286       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:41:59.616699       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:41:59.720477       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:41:59.720566       1 metrics.go:72] Registering metrics
	I1122 00:41:59.720672       1 controller.go:711] "Syncing nftables rules"
	I1122 00:42:09.522686       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:42:09.522725       1 main.go:301] handling current node
	I1122 00:42:19.515805       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:42:19.516051       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2ab595e5f196dc17ecf5b0fc15d3903663047a05453ec4e7edbc28055be790fa] <==
	I1122 00:41:45.869298       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:41:45.869304       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:41:45.869309       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:41:45.871607       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:41:45.871664       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:41:45.889701       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:41:45.906795       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:41:46.086892       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:41:46.113967       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:41:46.120563       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:41:47.989605       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:41:48.121810       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:41:48.244361       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:41:48.257309       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:41:48.258772       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:41:48.268313       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:41:48.781176       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:41:50.250118       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:41:50.279528       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:41:50.300123       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:41:54.531903       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:41:54.559290       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:41:54.573178       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:41:54.981847       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1122 00:42:23.380743       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:52956: use of closed network connection
	
	
	==> kube-controller-manager [90ef3180c03a0166582d94e9b36cee4461ebe824827b4d18bae507872b9e520f] <==
	I1122 00:41:53.852723       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:41:53.857947       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:41:53.858291       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:41:53.861513       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:41:53.861678       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:41:53.861877       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:41:53.862036       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:41:53.862141       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:41:53.862155       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:41:53.862561       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:41:53.866885       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:41:53.867478       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:41:53.867505       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:41:53.870299       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:41:53.870839       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:41:53.871020       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:41:53.871193       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:41:53.873243       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:41:53.874499       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:41:53.879662       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:41:53.890304       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:41:53.914811       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:41:53.914993       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:41:53.915068       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:42:13.861991       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bb89b752fb8c9bd33331d89cd18385d39b6b8c698e4464bf4aa55cfad64548e7] <==
	I1122 00:41:56.134602       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:41:56.229724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:41:56.333363       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:41:56.333401       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:41:56.333485       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:41:56.462677       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:41:56.462762       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:41:56.468347       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:41:56.468789       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:41:56.468805       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:41:56.473912       1 config.go:200] "Starting service config controller"
	I1122 00:41:56.473930       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:41:56.473951       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:41:56.473955       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:41:56.473967       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:41:56.473970       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:41:56.481426       1 config.go:309] "Starting node config controller"
	I1122 00:41:56.481462       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:41:56.481472       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:41:56.574757       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:41:56.574789       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:41:56.574830       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [eaaf10be348904a85c908c1eafcae30b2dfe6ee0280a7611486f32fecb5800a3] <==
	I1122 00:41:46.030965       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:41:49.366453       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:41:49.366548       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:41:49.371489       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:41:49.371910       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:41:49.371986       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:41:49.372058       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:41:49.382672       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:41:49.382821       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:41:49.383006       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:41:49.383047       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:41:49.472811       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1122 00:41:49.483903       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:41:49.484729       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:41:51 no-preload-734654 kubelet[2106]: I1122 00:41:51.579132    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-734654" podStartSLOduration=5.579111109 podStartE2EDuration="5.579111109s" podCreationTimestamp="2025-11-22 00:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:41:51.555043317 +0000 UTC m=+1.369998709" watchObservedRunningTime="2025-11-22 00:41:51.579111109 +0000 UTC m=+1.394066493"
	Nov 22 00:41:51 no-preload-734654 kubelet[2106]: I1122 00:41:51.600905    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-734654" podStartSLOduration=1.6008858 podStartE2EDuration="1.6008858s" podCreationTimestamp="2025-11-22 00:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:41:51.579513395 +0000 UTC m=+1.394468779" watchObservedRunningTime="2025-11-22 00:41:51.6008858 +0000 UTC m=+1.415841184"
	Nov 22 00:41:51 no-preload-734654 kubelet[2106]: I1122 00:41:51.628034    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-734654" podStartSLOduration=1.628014736 podStartE2EDuration="1.628014736s" podCreationTimestamp="2025-11-22 00:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:41:51.601183338 +0000 UTC m=+1.416138730" watchObservedRunningTime="2025-11-22 00:41:51.628014736 +0000 UTC m=+1.442970128"
	Nov 22 00:41:51 no-preload-734654 kubelet[2106]: I1122 00:41:51.649718    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-734654" podStartSLOduration=1.649698275 podStartE2EDuration="1.649698275s" podCreationTimestamp="2025-11-22 00:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:41:51.628392112 +0000 UTC m=+1.443347496" watchObservedRunningTime="2025-11-22 00:41:51.649698275 +0000 UTC m=+1.464653659"
	Nov 22 00:41:53 no-preload-734654 kubelet[2106]: I1122 00:41:53.911994    2106 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:41:53 no-preload-734654 kubelet[2106]: I1122 00:41:53.913751    2106 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919151    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eed2027d-917a-415e-ad0e-2c5496b01040-lib-modules\") pod \"kindnet-72xnf\" (UID: \"eed2027d-917a-415e-ad0e-2c5496b01040\") " pod="kube-system/kindnet-72xnf"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919204    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e6d1b10-56de-4646-b8d8-9ca98489dca8-lib-modules\") pod \"kube-proxy-m2v57\" (UID: \"7e6d1b10-56de-4646-b8d8-9ca98489dca8\") " pod="kube-system/kube-proxy-m2v57"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919225    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97q9\" (UniqueName: \"kubernetes.io/projected/7e6d1b10-56de-4646-b8d8-9ca98489dca8-kube-api-access-s97q9\") pod \"kube-proxy-m2v57\" (UID: \"7e6d1b10-56de-4646-b8d8-9ca98489dca8\") " pod="kube-system/kube-proxy-m2v57"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919255    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e6d1b10-56de-4646-b8d8-9ca98489dca8-xtables-lock\") pod \"kube-proxy-m2v57\" (UID: \"7e6d1b10-56de-4646-b8d8-9ca98489dca8\") " pod="kube-system/kube-proxy-m2v57"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919279    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eed2027d-917a-415e-ad0e-2c5496b01040-cni-cfg\") pod \"kindnet-72xnf\" (UID: \"eed2027d-917a-415e-ad0e-2c5496b01040\") " pod="kube-system/kindnet-72xnf"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919297    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eed2027d-917a-415e-ad0e-2c5496b01040-xtables-lock\") pod \"kindnet-72xnf\" (UID: \"eed2027d-917a-415e-ad0e-2c5496b01040\") " pod="kube-system/kindnet-72xnf"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919313    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dj52\" (UniqueName: \"kubernetes.io/projected/eed2027d-917a-415e-ad0e-2c5496b01040-kube-api-access-6dj52\") pod \"kindnet-72xnf\" (UID: \"eed2027d-917a-415e-ad0e-2c5496b01040\") " pod="kube-system/kindnet-72xnf"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919342    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e6d1b10-56de-4646-b8d8-9ca98489dca8-kube-proxy\") pod \"kube-proxy-m2v57\" (UID: \"7e6d1b10-56de-4646-b8d8-9ca98489dca8\") " pod="kube-system/kube-proxy-m2v57"
	Nov 22 00:41:55 no-preload-734654 kubelet[2106]: I1122 00:41:55.111751    2106 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:41:56 no-preload-734654 kubelet[2106]: I1122 00:41:56.872722    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m2v57" podStartSLOduration=2.8727033349999997 podStartE2EDuration="2.872703335s" podCreationTimestamp="2025-11-22 00:41:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:41:56.872438601 +0000 UTC m=+6.687393993" watchObservedRunningTime="2025-11-22 00:41:56.872703335 +0000 UTC m=+6.687658760"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.616690    2106 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.660771    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-72xnf" podStartSLOduration=12.567362305 podStartE2EDuration="15.660728742s" podCreationTimestamp="2025-11-22 00:41:54 +0000 UTC" firstStartedPulling="2025-11-22 00:41:55.797081334 +0000 UTC m=+5.612036718" lastFinishedPulling="2025-11-22 00:41:58.890447771 +0000 UTC m=+8.705403155" observedRunningTime="2025-11-22 00:41:59.863660294 +0000 UTC m=+9.678615686" watchObservedRunningTime="2025-11-22 00:42:09.660728742 +0000 UTC m=+19.475684142"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.777110    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03d0ec5c-3721-4533-9cb8-f5210335d7a6-config-volume\") pod \"coredns-66bc5c9577-7ddjv\" (UID: \"03d0ec5c-3721-4533-9cb8-f5210335d7a6\") " pod="kube-system/coredns-66bc5c9577-7ddjv"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.777319    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67lxt\" (UniqueName: \"kubernetes.io/projected/03d0ec5c-3721-4533-9cb8-f5210335d7a6-kube-api-access-67lxt\") pod \"coredns-66bc5c9577-7ddjv\" (UID: \"03d0ec5c-3721-4533-9cb8-f5210335d7a6\") " pod="kube-system/coredns-66bc5c9577-7ddjv"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.878152    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/62e9fefe-8213-4869-badc-8ff66248f8fa-tmp\") pod \"storage-provisioner\" (UID: \"62e9fefe-8213-4869-badc-8ff66248f8fa\") " pod="kube-system/storage-provisioner"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.878346    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nq5b\" (UniqueName: \"kubernetes.io/projected/62e9fefe-8213-4869-badc-8ff66248f8fa-kube-api-access-5nq5b\") pod \"storage-provisioner\" (UID: \"62e9fefe-8213-4869-badc-8ff66248f8fa\") " pod="kube-system/storage-provisioner"
	Nov 22 00:42:10 no-preload-734654 kubelet[2106]: I1122 00:42:10.867482    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.867451052 podStartE2EDuration="14.867451052s" podCreationTimestamp="2025-11-22 00:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:42:10.867274869 +0000 UTC m=+20.682230253" watchObservedRunningTime="2025-11-22 00:42:10.867451052 +0000 UTC m=+20.682406436"
	Nov 22 00:42:13 no-preload-734654 kubelet[2106]: I1122 00:42:13.174967    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7ddjv" podStartSLOduration=18.174946892 podStartE2EDuration="18.174946892s" podCreationTimestamp="2025-11-22 00:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:42:10.900674691 +0000 UTC m=+20.715630091" watchObservedRunningTime="2025-11-22 00:42:13.174946892 +0000 UTC m=+22.989902292"
	Nov 22 00:42:13 no-preload-734654 kubelet[2106]: I1122 00:42:13.215162    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lc2s\" (UniqueName: \"kubernetes.io/projected/3e7f3b0f-2b43-4a0d-bf2a-130affdd4fe0-kube-api-access-8lc2s\") pod \"busybox\" (UID: \"3e7f3b0f-2b43-4a0d-bf2a-130affdd4fe0\") " pod="default/busybox"
	
	
	==> storage-provisioner [03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b] <==
	I1122 00:42:10.750870       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1122 00:42:10.773420       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1122 00:42:10.773819       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:42:10.776330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:10.785727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:42:10.786043       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:42:10.786341       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-734654_8075120c-2162-4a78-a265-bb3566c525f1!
	I1122 00:42:10.786459       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd06f4be-ec1a-459d-89a0-7f78a8051ba5", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-734654_8075120c-2162-4a78-a265-bb3566c525f1 became leader
	W1122 00:42:10.796523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:10.804397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:42:10.886776       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-734654_8075120c-2162-4a78-a265-bb3566c525f1!
	W1122 00:42:12.808289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:12.813654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:14.817698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:14.826142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:16.829176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:16.836895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:18.840379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:18.845234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:20.848418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:20.856115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:22.860563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:22.867777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:24.875698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:24.889139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice

-- /stdout --
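Note: the storage-provisioner log above is dominated by repeated warnings that v1 Endpoints is deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice; the provisioner's leader election still reads and writes the kube-system/k8s.io-minikube-hostpath Endpoints object, so each lease renewal triggers another warning. A minimal sketch for inspecting both API surfaces on this cluster, using standard kubectl invocations with the context and object names taken from the logs above:

	# Leader-election record the provisioner keeps updating (deprecated v1 Endpoints API):
	kubectl --context no-preload-734654 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# Replacement API the warning points to:
	kubectl --context no-preload-734654 -n kube-system get endpointslices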
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-734654 -n no-preload-734654
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-734654 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-734654
helpers_test.go:243: (dbg) docker inspect no-preload-734654:

-- stdout --
	[
	    {
	        "Id": "b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45",
	        "Created": "2025-11-22T00:41:00.477134155Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229777,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-22T00:41:00.578746991Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:97061622515a85470dad13cb046ad5fc5021fcd2b05aa921a620abb52951cd5d",
	        "ResolvConfPath": "/var/lib/docker/containers/b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45/hostname",
	        "HostsPath": "/var/lib/docker/containers/b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45/hosts",
	        "LogPath": "/var/lib/docker/containers/b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45/b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45-json.log",
	        "Name": "/no-preload-734654",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-734654:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-734654",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "b1ccdb27c213fc9db537fc39de0117820edccf9d6b17ac006b29b71a24473e45",
	                "LowerDir": "/var/lib/docker/overlay2/e031e4fb56e6924d708bf512ea3967c95e3578b6219afae593a0888cc6506ce5-init/diff:/var/lib/docker/overlay2/7cce95e9587a813ce5f3ee5f28c6de3b78ed608010774b6d981aecaad739a571/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e031e4fb56e6924d708bf512ea3967c95e3578b6219afae593a0888cc6506ce5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e031e4fb56e6924d708bf512ea3967c95e3578b6219afae593a0888cc6506ce5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e031e4fb56e6924d708bf512ea3967c95e3578b6219afae593a0888cc6506ce5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-734654",
	                "Source": "/var/lib/docker/volumes/no-preload-734654/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-734654",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-734654",
	                "name.minikube.sigs.k8s.io": "no-preload-734654",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5c3472e65b21ddadf93ac73507f39b1ff5447afb97867a9e3177480e5290a239",
	            "SandboxKey": "/var/run/docker/netns/5c3472e65b21",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33084"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33086"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-734654": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:66:c2:ee:21:b2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "321d9f6a63d209046b671cd2254c246e2830ed82acde6626fbf626526ff0f2e7",
	                    "EndpointID": "29bff7b62d2706ee730f5affdc9a972c11c0a504233bc26c3ccbf75e38c8fcf5",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-734654",
	                        "b1ccdb27c213"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
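Note: in the HostConfig section above, "Ulimits": [] means no per-container ulimit overrides were applied, so the container inherits the Docker daemon's defaults. A minimal way to spot-check this from the host, assuming the container is still running (standard docker CLI; names taken from the inspect output above):

	# Print only the Ulimits field via a Go template:
	docker inspect -f '{{.HostConfig.Ulimits}}' no-preload-734654
	# Effective open-files limit as seen inside the container:
	docker exec no-preload-734654 sh -c 'ulimit -n'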
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-734654 -n no-preload-734654
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-734654 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-734654 logs -n 25: (1.681432749s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                        ARGS                                                                                                                         │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-080784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ stop    │ -p default-k8s-diff-port-080784 --alsologtostderr -v=3                                                                                                                                                                                              │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-080784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ start   │ -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:40 UTC │
	│ addons  │ enable metrics-server -p embed-certs-540723 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                            │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:39 UTC │
	│ stop    │ -p embed-certs-540723 --alsologtostderr -v=3                                                                                                                                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:39 UTC │ 22 Nov 25 00:40 UTC │
	│ addons  │ enable dashboard -p embed-certs-540723 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                       │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ start   │ -p embed-certs-540723 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:41 UTC │
	│ image   │ default-k8s-diff-port-080784 image list --format=json                                                                                                                                                                                               │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ pause   │ -p default-k8s-diff-port-080784 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ unpause │ -p default-k8s-diff-port-080784 --alsologtostderr -v=1                                                                                                                                                                                              │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ delete  │ -p default-k8s-diff-port-080784                                                                                                                                                                                                                     │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ delete  │ -p default-k8s-diff-port-080784                                                                                                                                                                                                                     │ default-k8s-diff-port-080784 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ delete  │ -p disable-driver-mounts-577767                                                                                                                                                                                                                     │ disable-driver-mounts-577767 │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:40 UTC │
	│ start   │ -p no-preload-734654 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1                                                                                       │ no-preload-734654            │ jenkins │ v1.37.0 │ 22 Nov 25 00:40 UTC │ 22 Nov 25 00:42 UTC │
	│ image   │ embed-certs-540723 image list --format=json                                                                                                                                                                                                         │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ pause   │ -p embed-certs-540723 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ unpause │ -p embed-certs-540723 --alsologtostderr -v=1                                                                                                                                                                                                        │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ delete  │ -p embed-certs-540723                                                                                                                                                                                                                               │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ delete  │ -p embed-certs-540723                                                                                                                                                                                                                               │ embed-certs-540723           │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:41 UTC │
	│ start   │ -p newest-cni-953404 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-953404            │ jenkins │ v1.37.0 │ 22 Nov 25 00:41 UTC │ 22 Nov 25 00:42 UTC │
	│ addons  │ enable metrics-server -p newest-cni-953404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                             │ newest-cni-953404            │ jenkins │ v1.37.0 │ 22 Nov 25 00:42 UTC │ 22 Nov 25 00:42 UTC │
	│ stop    │ -p newest-cni-953404 --alsologtostderr -v=3                                                                                                                                                                                                         │ newest-cni-953404            │ jenkins │ v1.37.0 │ 22 Nov 25 00:42 UTC │ 22 Nov 25 00:42 UTC │
	│ addons  │ enable dashboard -p newest-cni-953404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                        │ newest-cni-953404            │ jenkins │ v1.37.0 │ 22 Nov 25 00:42 UTC │ 22 Nov 25 00:42 UTC │
	│ start   │ -p newest-cni-953404 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1 │ newest-cni-953404            │ jenkins │ v1.37.0 │ 22 Nov 25 00:42 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/22 00:42:09
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
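	The [IWEF] leader encodes severity (Info, Warning, Error, Fatal). A minimal sketch, assuming GNU grep and that the dump below is saved to a file (last_start.log is a hypothetical name), for surfacing only warning-and-above lines:
	# keep W/E/F lines; the leader letter is immediately followed by mmdd
	grep -E '^[[:space:]]*[WEF][0-9]{4} ' last_start.log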
	I1122 00:42:09.185358  236928 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:42:09.185488  236928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:42:09.185499  236928 out.go:374] Setting ErrFile to fd 2...
	I1122 00:42:09.185505  236928 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:42:09.185768  236928 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:42:09.186143  236928 out.go:368] Setting JSON to false
	I1122 00:42:09.187070  236928 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":5067,"bootTime":1763767063,"procs":199,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1122 00:42:09.187139  236928 start.go:143] virtualization:  
	I1122 00:42:09.190252  236928 out.go:179] * [newest-cni-953404] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:42:09.194157  236928 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:42:09.194332  236928 notify.go:221] Checking for updates...
	I1122 00:42:09.200126  236928 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:42:09.203033  236928 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:42:09.205978  236928 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1122 00:42:09.208817  236928 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:42:09.211849  236928 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:42:09.215257  236928 config.go:182] Loaded profile config "newest-cni-953404": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:42:09.215973  236928 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:42:09.241253  236928 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:42:09.241357  236928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:42:09.317184  236928 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:42:09.305724865 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:42:09.317293  236928 docker.go:319] overlay module found
	I1122 00:42:09.320532  236928 out.go:179] * Using the docker driver based on existing profile
	I1122 00:42:09.323510  236928 start.go:309] selected driver: docker
	I1122 00:42:09.323545  236928 start.go:930] validating driver "docker" against &{Name:newest-cni-953404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-953404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:42:09.323779  236928 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:42:09.324507  236928 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:42:09.383356  236928 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:42:09.373480492 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:42:09.383784  236928 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1122 00:42:09.383822  236928 cni.go:84] Creating CNI manager for ""
	I1122 00:42:09.383881  236928 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:42:09.383927  236928 start.go:353] cluster config:
	{Name:newest-cni-953404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-953404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:42:09.387050  236928 out.go:179] * Starting "newest-cni-953404" primary control-plane node in "newest-cni-953404" cluster
	I1122 00:42:09.389764  236928 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1122 00:42:09.392732  236928 out.go:179] * Pulling base image v0.0.48-1763588073-21934 ...
	I1122 00:42:09.395723  236928 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:42:09.395770  236928 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1122 00:42:09.395781  236928 cache.go:65] Caching tarball of preloaded images
	I1122 00:42:09.395816  236928 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1122 00:42:09.395883  236928 preload.go:238] Found /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 in cache, skipping download
	I1122 00:42:09.395893  236928 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1122 00:42:09.396008  236928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/config.json ...
	I1122 00:42:09.417021  236928 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon, skipping pull
	I1122 00:42:09.417046  236928 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in daemon, skipping load
	I1122 00:42:09.417061  236928 cache.go:243] Successfully downloaded all kic artifacts
	I1122 00:42:09.417084  236928 start.go:360] acquireMachinesLock for newest-cni-953404: {Name:mk9f77ab0cb88bc744c03e61f3cd82397d16e4c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1122 00:42:09.417146  236928 start.go:364] duration metric: took 37.867µs to acquireMachinesLock for "newest-cni-953404"
	I1122 00:42:09.417170  236928 start.go:96] Skipping create...Using existing machine configuration
	I1122 00:42:09.417179  236928 fix.go:54] fixHost starting: 
	I1122 00:42:09.417442  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:09.436454  236928 fix.go:112] recreateIfNeeded on newest-cni-953404: state=Stopped err=<nil>
	W1122 00:42:09.436484  236928 fix.go:138] unexpected machine state, will restart: <nil>
	I1122 00:42:09.756241  229475 node_ready.go:49] node "no-preload-734654" is "Ready"
	I1122 00:42:09.756268  229475 node_ready.go:38] duration metric: took 13.004360993s for node "no-preload-734654" to be "Ready" ...
	I1122 00:42:09.756282  229475 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:42:09.756337  229475 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:42:09.788658  229475 api_server.go:72] duration metric: took 15.555756899s to wait for apiserver process to appear ...
	I1122 00:42:09.788682  229475 api_server.go:88] waiting for apiserver healthz status ...
	I1122 00:42:09.788712  229475 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1122 00:42:09.799772  229475 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1122 00:42:09.801452  229475 api_server.go:141] control plane version: v1.34.1
	I1122 00:42:09.801480  229475 api_server.go:131] duration metric: took 12.791334ms to wait for apiserver health ...
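	The same probe can be run by hand against the endpoint logged above; a sketch, assuming the node IP is reachable from where it is run (-k skips verification of minikube's self-signed apiserver certificate):
	curl -k https://192.168.85.2:8443/healthz
	# expected body on success: ok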
	I1122 00:42:09.801490  229475 system_pods.go:43] waiting for kube-system pods to appear ...
	I1122 00:42:09.807973  229475 system_pods.go:59] 8 kube-system pods found
	I1122 00:42:09.808005  229475 system_pods.go:61] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:42:09.808011  229475 system_pods.go:61] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:09.808017  229475 system_pods.go:61] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:09.808021  229475 system_pods.go:61] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:09.808025  229475 system_pods.go:61] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:09.808029  229475 system_pods.go:61] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:09.808033  229475 system_pods.go:61] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:09.808038  229475 system_pods.go:61] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:42:09.808045  229475 system_pods.go:74] duration metric: took 6.548403ms to wait for pod list to return data ...
	I1122 00:42:09.808053  229475 default_sa.go:34] waiting for default service account to be created ...
	I1122 00:42:09.814305  229475 default_sa.go:45] found service account: "default"
	I1122 00:42:09.814336  229475 default_sa.go:55] duration metric: took 6.27672ms for default service account to be created ...
	I1122 00:42:09.814469  229475 system_pods.go:116] waiting for k8s-apps to be running ...
	I1122 00:42:09.819778  229475 system_pods.go:86] 8 kube-system pods found
	I1122 00:42:09.819813  229475 system_pods.go:89] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:42:09.819820  229475 system_pods.go:89] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:09.819826  229475 system_pods.go:89] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:09.819830  229475 system_pods.go:89] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:09.819835  229475 system_pods.go:89] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:09.819839  229475 system_pods.go:89] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:09.819843  229475 system_pods.go:89] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:09.819849  229475 system_pods.go:89] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:42:09.819879  229475 retry.go:31] will retry after 276.611466ms: missing components: kube-dns
	I1122 00:42:10.104870  229475 system_pods.go:86] 8 kube-system pods found
	I1122 00:42:10.104905  229475 system_pods.go:89] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:42:10.104914  229475 system_pods.go:89] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:10.104920  229475 system_pods.go:89] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:10.104925  229475 system_pods.go:89] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:10.104931  229475 system_pods.go:89] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:10.104940  229475 system_pods.go:89] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:10.104945  229475 system_pods.go:89] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:10.104957  229475 system_pods.go:89] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:42:10.104975  229475 retry.go:31] will retry after 237.799926ms: missing components: kube-dns
	I1122 00:42:10.351220  229475 system_pods.go:86] 8 kube-system pods found
	I1122 00:42:10.351285  229475 system_pods.go:89] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:42:10.351293  229475 system_pods.go:89] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:10.351299  229475 system_pods.go:89] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:10.351303  229475 system_pods.go:89] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:10.351307  229475 system_pods.go:89] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:10.351310  229475 system_pods.go:89] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:10.351314  229475 system_pods.go:89] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:10.351319  229475 system_pods.go:89] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:42:10.351333  229475 retry.go:31] will retry after 343.711479ms: missing components: kube-dns
	I1122 00:42:10.701981  229475 system_pods.go:86] 8 kube-system pods found
	I1122 00:42:10.702013  229475 system_pods.go:89] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1122 00:42:10.702020  229475 system_pods.go:89] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:10.702026  229475 system_pods.go:89] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:10.702031  229475 system_pods.go:89] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:10.702036  229475 system_pods.go:89] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:10.702040  229475 system_pods.go:89] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:10.702044  229475 system_pods.go:89] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:10.702050  229475 system_pods.go:89] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1122 00:42:10.702065  229475 retry.go:31] will retry after 527.354094ms: missing components: kube-dns
	I1122 00:42:11.233280  229475 system_pods.go:86] 8 kube-system pods found
	I1122 00:42:11.233316  229475 system_pods.go:89] "coredns-66bc5c9577-7ddjv" [03d0ec5c-3721-4533-9cb8-f5210335d7a6] Running
	I1122 00:42:11.233324  229475 system_pods.go:89] "etcd-no-preload-734654" [dad2476b-7fb5-448a-be9e-cabeca5bc77a] Running
	I1122 00:42:11.233329  229475 system_pods.go:89] "kindnet-72xnf" [eed2027d-917a-415e-ad0e-2c5496b01040] Running
	I1122 00:42:11.233334  229475 system_pods.go:89] "kube-apiserver-no-preload-734654" [ad0640f1-8f9f-4811-a887-f901dea298fc] Running
	I1122 00:42:11.233339  229475 system_pods.go:89] "kube-controller-manager-no-preload-734654" [221e7c30-2736-4b1e-a340-68fb81753574] Running
	I1122 00:42:11.233343  229475 system_pods.go:89] "kube-proxy-m2v57" [7e6d1b10-56de-4646-b8d8-9ca98489dca8] Running
	I1122 00:42:11.233397  229475 system_pods.go:89] "kube-scheduler-no-preload-734654" [74e6ed7e-dd65-4e36-a404-098cecfddc8a] Running
	I1122 00:42:11.233413  229475 system_pods.go:89] "storage-provisioner" [62e9fefe-8213-4869-badc-8ff66248f8fa] Running
	I1122 00:42:11.233421  229475 system_pods.go:126] duration metric: took 1.418939899s to wait for k8s-apps to be running ...
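	Each retry above is gated only on kube-dns. The same readiness transition can be watched by hand; a sketch reusing the kubectl context name from this run (CoreDNS pods carry the k8s-app=kube-dns label in kubeadm-based clusters):
	kubectl --context no-preload-734654 -n kube-system get pods -l k8s-app=kube-dns -w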
	I1122 00:42:11.233429  229475 system_svc.go:44] waiting for kubelet service to be running ....
	I1122 00:42:11.233515  229475 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:42:11.247292  229475 system_svc.go:56] duration metric: took 13.852935ms WaitForService to wait for kubelet
	I1122 00:42:11.247320  229475 kubeadm.go:587] duration metric: took 17.014423776s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1122 00:42:11.247339  229475 node_conditions.go:102] verifying NodePressure condition ...
	I1122 00:42:11.250285  229475 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1122 00:42:11.250319  229475 node_conditions.go:123] node cpu capacity is 2
	I1122 00:42:11.250334  229475 node_conditions.go:105] duration metric: took 2.989512ms to run NodePressure ...
	I1122 00:42:11.250347  229475 start.go:242] waiting for startup goroutines ...
	I1122 00:42:11.250355  229475 start.go:247] waiting for cluster config update ...
	I1122 00:42:11.250366  229475 start.go:256] writing updated cluster config ...
	I1122 00:42:11.250692  229475 ssh_runner.go:195] Run: rm -f paused
	I1122 00:42:11.254727  229475 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:42:11.258170  229475 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-7ddjv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.262696  229475 pod_ready.go:94] pod "coredns-66bc5c9577-7ddjv" is "Ready"
	I1122 00:42:11.262726  229475 pod_ready.go:86] duration metric: took 4.529135ms for pod "coredns-66bc5c9577-7ddjv" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.265246  229475 pod_ready.go:83] waiting for pod "etcd-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.269680  229475 pod_ready.go:94] pod "etcd-no-preload-734654" is "Ready"
	I1122 00:42:11.269706  229475 pod_ready.go:86] duration metric: took 4.434152ms for pod "etcd-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.272011  229475 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.276732  229475 pod_ready.go:94] pod "kube-apiserver-no-preload-734654" is "Ready"
	I1122 00:42:11.276757  229475 pod_ready.go:86] duration metric: took 4.719194ms for pod "kube-apiserver-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.279214  229475 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.659228  229475 pod_ready.go:94] pod "kube-controller-manager-no-preload-734654" is "Ready"
	I1122 00:42:11.659266  229475 pod_ready.go:86] duration metric: took 380.026823ms for pod "kube-controller-manager-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:11.860500  229475 pod_ready.go:83] waiting for pod "kube-proxy-m2v57" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:12.258453  229475 pod_ready.go:94] pod "kube-proxy-m2v57" is "Ready"
	I1122 00:42:12.258484  229475 pod_ready.go:86] duration metric: took 397.956677ms for pod "kube-proxy-m2v57" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:12.458763  229475 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:12.859371  229475 pod_ready.go:94] pod "kube-scheduler-no-preload-734654" is "Ready"
	I1122 00:42:12.859447  229475 pod_ready.go:86] duration metric: took 400.656455ms for pod "kube-scheduler-no-preload-734654" in "kube-system" namespace to be "Ready" or be gone ...
	I1122 00:42:12.859489  229475 pod_ready.go:40] duration metric: took 1.604727938s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1122 00:42:12.928731  229475 start.go:628] kubectl: 1.33.2, cluster: 1.34.1 (minor skew: 1)
	I1122 00:42:12.932110  229475 out.go:179] * Done! kubectl is now configured to use "no-preload-734654" cluster and "default" namespace by default
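	"Done!" means the active kubeconfig context was switched to the new cluster. A quick way to confirm the handoff, assuming minikube's convention of naming the context after the profile:
	kubectl config current-context   # expected: no-preload-734654
	kubectl get nodes -o wide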
	I1122 00:42:09.439690  236928 out.go:252] * Restarting existing docker container for "newest-cni-953404" ...
	I1122 00:42:09.439781  236928 cli_runner.go:164] Run: docker start newest-cni-953404
	I1122 00:42:09.764406  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:09.799309  236928 kic.go:430] container "newest-cni-953404" state is running.
	I1122 00:42:09.799848  236928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-953404
	I1122 00:42:09.832642  236928 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/config.json ...
	I1122 00:42:09.832888  236928 machine.go:94] provisionDockerMachine start ...
	I1122 00:42:09.833152  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:09.857788  236928 main.go:143] libmachine: Using SSH client type: native
	I1122 00:42:09.858231  236928 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1122 00:42:09.858246  236928 main.go:143] libmachine: About to run SSH command:
	hostname
	I1122 00:42:09.858796  236928 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:42398->127.0.0.1:33093: read: connection reset by peer
	I1122 00:42:13.034964  236928 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-953404
	
	I1122 00:42:13.034986  236928 ubuntu.go:182] provisioning hostname "newest-cni-953404"
	I1122 00:42:13.035043  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:13.082568  236928 main.go:143] libmachine: Using SSH client type: native
	I1122 00:42:13.082914  236928 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1122 00:42:13.082926  236928 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-953404 && echo "newest-cni-953404" | sudo tee /etc/hostname
	I1122 00:42:13.260937  236928 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-953404
	
	I1122 00:42:13.261021  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:13.280159  236928 main.go:143] libmachine: Using SSH client type: native
	I1122 00:42:13.280472  236928 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x3dad70] 0x3dd270 <nil>  [] 0s} 127.0.0.1 33093 <nil> <nil>}
	I1122 00:42:13.280498  236928 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-953404' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-953404/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-953404' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1122 00:42:13.427854  236928 main.go:143] libmachine: SSH cmd err, output: <nil>: 
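	The shell block above is the provisioner's idempotent hosts update: if no line already ends in the hostname, it either rewrites an existing 127.0.1.1 entry in place or appends one. The same pattern as a standalone sketch (H is a placeholder for the desired hostname):
	H=newest-cni-953404
	if ! grep -q "[[:space:]]${H}\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    # rewrite the existing loopback alias in place
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${H}/" /etc/hosts
	  else
	    # no alias yet: append one
	    echo "127.0.1.1 ${H}" | sudo tee -a /etc/hosts
	  fi
	fi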
	I1122 00:42:13.427889  236928 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21934-2332/.minikube CaCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21934-2332/.minikube}
	I1122 00:42:13.427916  236928 ubuntu.go:190] setting up certificates
	I1122 00:42:13.427927  236928 provision.go:84] configureAuth start
	I1122 00:42:13.427996  236928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-953404
	I1122 00:42:13.447278  236928 provision.go:143] copyHostCerts
	I1122 00:42:13.447360  236928 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem, removing ...
	I1122 00:42:13.447382  236928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem
	I1122 00:42:13.447461  236928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/ca.pem (1078 bytes)
	I1122 00:42:13.447757  236928 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem, removing ...
	I1122 00:42:13.447773  236928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem
	I1122 00:42:13.447830  236928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/cert.pem (1123 bytes)
	I1122 00:42:13.447950  236928 exec_runner.go:144] found /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem, removing ...
	I1122 00:42:13.447962  236928 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem
	I1122 00:42:13.447992  236928 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21934-2332/.minikube/key.pem (1675 bytes)
	I1122 00:42:13.448056  236928 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem org=jenkins.newest-cni-953404 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-953404]
	I1122 00:42:13.840746  236928 provision.go:177] copyRemoteCerts
	I1122 00:42:13.840815  236928 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1122 00:42:13.840860  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:13.861889  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:13.963235  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1122 00:42:13.983157  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1122 00:42:14.002290  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1122 00:42:14.024190  236928 provision.go:87] duration metric: took 596.238809ms to configureAuth
	I1122 00:42:14.024263  236928 ubuntu.go:206] setting minikube options for container-runtime
	I1122 00:42:14.024517  236928 config.go:182] Loaded profile config "newest-cni-953404": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:42:14.024533  236928 machine.go:97] duration metric: took 4.191627738s to provisionDockerMachine
	I1122 00:42:14.024542  236928 start.go:293] postStartSetup for "newest-cni-953404" (driver="docker")
	I1122 00:42:14.024552  236928 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1122 00:42:14.024614  236928 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1122 00:42:14.024669  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:14.043165  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:14.143482  236928 ssh_runner.go:195] Run: cat /etc/os-release
	I1122 00:42:14.146978  236928 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1122 00:42:14.147007  236928 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1122 00:42:14.147019  236928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/addons for local assets ...
	I1122 00:42:14.147071  236928 filesync.go:126] Scanning /home/jenkins/minikube-integration/21934-2332/.minikube/files for local assets ...
	I1122 00:42:14.147150  236928 filesync.go:149] local asset: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem -> 56232.pem in /etc/ssl/certs
	I1122 00:42:14.147263  236928 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1122 00:42:14.154798  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:42:14.173955  236928 start.go:296] duration metric: took 149.397354ms for postStartSetup
	I1122 00:42:14.174049  236928 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:42:14.174087  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:14.191675  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:14.289323  236928 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1122 00:42:14.294245  236928 fix.go:56] duration metric: took 4.877059458s for fixHost
	I1122 00:42:14.294273  236928 start.go:83] releasing machines lock for "newest-cni-953404", held for 4.877112825s
	I1122 00:42:14.294348  236928 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-953404
	I1122 00:42:14.311794  236928 ssh_runner.go:195] Run: cat /version.json
	I1122 00:42:14.311857  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:14.312127  236928 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1122 00:42:14.312191  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:14.337390  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:14.347680  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:14.542295  236928 ssh_runner.go:195] Run: systemctl --version
	I1122 00:42:14.549194  236928 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1122 00:42:14.554185  236928 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1122 00:42:14.554315  236928 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1122 00:42:14.563333  236928 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1122 00:42:14.563413  236928 start.go:496] detecting cgroup driver to use...
	I1122 00:42:14.563476  236928 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1122 00:42:14.563604  236928 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1122 00:42:14.581922  236928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1122 00:42:14.597805  236928 docker.go:218] disabling cri-docker service (if available) ...
	I1122 00:42:14.597905  236928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1122 00:42:14.613882  236928 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1122 00:42:14.627697  236928 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1122 00:42:14.754622  236928 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1122 00:42:14.880516  236928 docker.go:234] disabling docker service ...
	I1122 00:42:14.880663  236928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1122 00:42:14.896367  236928 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1122 00:42:14.911158  236928 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1122 00:42:15.056656  236928 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1122 00:42:15.213418  236928 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1122 00:42:15.227844  236928 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1122 00:42:15.243131  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1122 00:42:15.252426  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1122 00:42:15.261942  236928 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1122 00:42:15.262010  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1122 00:42:15.271498  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:42:15.281747  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1122 00:42:15.291527  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1122 00:42:15.302088  236928 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1122 00:42:15.311530  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1122 00:42:15.320697  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1122 00:42:15.329960  236928 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1122 00:42:15.339450  236928 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1122 00:42:15.347734  236928 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1122 00:42:15.355970  236928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:42:15.480115  236928 ssh_runner.go:195] Run: sudo systemctl restart containerd
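	The sed series above pins the sandbox (pause) image, sets SystemdCgroup = false to match the cgroupfs driver detected on the host, normalizes the runc runtime version, and resets the CNI conf_dir. After the restart, the merged result can be spot-checked inside the node (reachable, for example, via minikube ssh -p newest-cni-953404); containerd config dump prints the resolved configuration:
	sudo containerd config dump | grep -nE 'SystemdCgroup|sandbox_image|conf_dir'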
	I1122 00:42:15.644354  236928 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1122 00:42:15.644426  236928 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1122 00:42:15.648867  236928 start.go:564] Will wait 60s for crictl version
	I1122 00:42:15.648932  236928 ssh_runner.go:195] Run: which crictl
	I1122 00:42:15.652674  236928 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1122 00:42:15.684129  236928 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.1.5
	RuntimeApiVersion:  v1
	I1122 00:42:15.684214  236928 ssh_runner.go:195] Run: containerd --version
	I1122 00:42:15.704501  236928 ssh_runner.go:195] Run: containerd --version
	I1122 00:42:15.733779  236928 out.go:179] * Preparing Kubernetes v1.34.1 on containerd 2.1.5 ...
	I1122 00:42:15.736760  236928 cli_runner.go:164] Run: docker network inspect newest-cni-953404 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1122 00:42:15.753983  236928 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1122 00:42:15.758093  236928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1122 00:42:15.772421  236928 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1122 00:42:15.775471  236928 kubeadm.go:884] updating cluster {Name:newest-cni-953404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-953404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1122 00:42:15.775747  236928 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1122 00:42:15.775839  236928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:42:15.810469  236928 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:42:15.810496  236928 containerd.go:534] Images already preloaded, skipping extraction
	I1122 00:42:15.810555  236928 ssh_runner.go:195] Run: sudo crictl images --output json
	I1122 00:42:15.837530  236928 containerd.go:627] all images are preloaded for containerd runtime.
	I1122 00:42:15.837615  236928 cache_images.go:86] Images are preloaded, skipping loading
	I1122 00:42:15.837631  236928 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 containerd true true} ...
	I1122 00:42:15.837747  236928 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-953404 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-953404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
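	The doubled ExecStart= in the generated unit above is intentional systemd usage: for a non-oneshot service, an empty ExecStart= first clears any inherited command list so the next line can redefine it without a duplicate-directive error. A minimal drop-in sketch of the same pattern (path and flag set are illustrative, not taken from this log):
	# /etc/systemd/system/kubelet.service.d/10-override.conf (illustrative)
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
	# apply with: sudo systemctl daemon-reload && sudo systemctl restart kubelet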
	I1122 00:42:15.837817  236928 ssh_runner.go:195] Run: sudo crictl info
	I1122 00:42:15.865968  236928 cni.go:84] Creating CNI manager for ""
	I1122 00:42:15.865995  236928 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1122 00:42:15.866019  236928 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1122 00:42:15.866041  236928 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-953404 NodeName:newest-cni-953404 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1122 00:42:15.866181  236928 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-953404"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
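
The kubeadm config dumped above is rendered by minikube from Go templates (the parameter struct and field subset below are assumed for illustration; this is not minikube's actual template). A minimal text/template sketch that produces the same ClusterConfiguration shape:

package main

import (
	"os"
	"text/template"
)

// clusterParams holds the handful of values this sketch substitutes into
// the template; the real config carries many more fields, as the log shows.
type clusterParams struct {
	BindPort          int
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		BindPort:          8443,
		KubernetesVersion: "v1.34.1",
		PodSubnet:         "10.42.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(clusterTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}

The rendered file is what gets scp'd to /var/tmp/minikube/kubeadm.yaml.new below (2228 bytes in this run).
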
	I1122 00:42:15.866254  236928 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1122 00:42:15.877540  236928 binaries.go:51] Found k8s binaries, skipping transfer
	I1122 00:42:15.877610  236928 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1122 00:42:15.887060  236928 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1122 00:42:15.902271  236928 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1122 00:42:15.916946  236928 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2228 bytes)
	I1122 00:42:15.931863  236928 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1122 00:42:15.936336  236928 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
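
The one-liner above makes the /etc/hosts entry idempotent: grep -v strips any stale control-plane.minikube.internal line, echo appends the current mapping, and the result is staged through /tmp/h.$$ and copied back with sudo because the runner is not root. A minimal Go sketch of the same rewrite, assuming direct write access instead of the sudo/temp-file staging:

package main

import (
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	// grep -v equivalent: drop any existing control-plane.minikube.internal line
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(raw), "\n"), "\n") {
		if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			kept = append(kept, line)
		}
	}
	// echo equivalent: append the current mapping, then write the file back
	kept = append(kept, "192.168.76.2\tcontrol-plane.minikube.internal")
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}
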
	I1122 00:42:15.948430  236928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:42:16.128382  236928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:42:16.166236  236928 certs.go:69] Setting up /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404 for IP: 192.168.76.2
	I1122 00:42:16.166259  236928 certs.go:195] generating shared ca certs ...
	I1122 00:42:16.166275  236928 certs.go:227] acquiring lock for ca certs: {Name:mk348a892ec4309987f6c81ee1acef4884ca62db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:42:16.166510  236928 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key
	I1122 00:42:16.166588  236928 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key
	I1122 00:42:16.166602  236928 certs.go:257] generating profile certs ...
	I1122 00:42:16.166727  236928 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/client.key
	I1122 00:42:16.166847  236928 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/apiserver.key.146c0f14
	I1122 00:42:16.166936  236928 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/proxy-client.key
	I1122 00:42:16.167094  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem (1338 bytes)
	W1122 00:42:16.167142  236928 certs.go:480] ignoring /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623_empty.pem, impossibly tiny 0 bytes
	I1122 00:42:16.167171  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca-key.pem (1679 bytes)
	I1122 00:42:16.167226  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/ca.pem (1078 bytes)
	I1122 00:42:16.167297  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/cert.pem (1123 bytes)
	I1122 00:42:16.167361  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/certs/key.pem (1675 bytes)
	I1122 00:42:16.167439  236928 certs.go:484] found cert: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem (1708 bytes)
	I1122 00:42:16.168325  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1122 00:42:16.198086  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1122 00:42:16.217572  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1122 00:42:16.269129  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1122 00:42:16.297544  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1122 00:42:16.320873  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1122 00:42:16.342539  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1122 00:42:16.369321  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/newest-cni-953404/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1122 00:42:16.396977  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/56232.pem --> /usr/share/ca-certificates/56232.pem (1708 bytes)
	I1122 00:42:16.422938  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1122 00:42:16.444609  236928 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21934-2332/.minikube/certs/5623.pem --> /usr/share/ca-certificates/5623.pem (1338 bytes)
	I1122 00:42:16.466176  236928 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1122 00:42:16.497348  236928 ssh_runner.go:195] Run: openssl version
	I1122 00:42:16.504746  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/56232.pem && ln -fs /usr/share/ca-certificates/56232.pem /etc/ssl/certs/56232.pem"
	I1122 00:42:16.515693  236928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/56232.pem
	I1122 00:42:16.519552  236928 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 21 23:55 /usr/share/ca-certificates/56232.pem
	I1122 00:42:16.519694  236928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/56232.pem
	I1122 00:42:16.564054  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/56232.pem /etc/ssl/certs/3ec20f2e.0"
	I1122 00:42:16.572704  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1122 00:42:16.581175  236928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:42:16.585430  236928 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 21 23:48 /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:42:16.585521  236928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1122 00:42:16.628666  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1122 00:42:16.636768  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/5623.pem && ln -fs /usr/share/ca-certificates/5623.pem /etc/ssl/certs/5623.pem"
	I1122 00:42:16.646049  236928 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/5623.pem
	I1122 00:42:16.650199  236928 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 21 23:55 /usr/share/ca-certificates/5623.pem
	I1122 00:42:16.650339  236928 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/5623.pem
	I1122 00:42:16.692407  236928 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/5623.pem /etc/ssl/certs/51391683.0"
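
The test -s / ln -fs pairs above install each PEM into /etc/ssl/certs twice: once under its own name and once under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem), the "<hash>.0" name OpenSSL's verify path looks up. A sketch of that pattern, shelling out to the same openssl invocation the log runs (the helper name is hypothetical and the hash link points at the PEM directly, a slight simplification of the two-hop links above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func installCert(pem string) error {
	// Same command as the log: openssl x509 -hash -noout -in <pem>
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pem, err)
	}
	hash := strings.TrimSpace(string(out))
	name := filepath.Join("/etc/ssl/certs", filepath.Base(pem))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	for _, l := range []string{name, link} {
		os.Remove(l) // ln -fs semantics: replace any stale link
		if err := os.Symlink(pem, l); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	if err := installCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
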
	I1122 00:42:16.700387  236928 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1122 00:42:16.704248  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1122 00:42:16.745460  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1122 00:42:16.788624  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1122 00:42:16.838463  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1122 00:42:16.882311  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1122 00:42:16.958637  236928 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
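
Each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 86400 seconds (24 hours); that is how the existing control-plane certs are screened before being reused. The same check in pure Go (file path taken from the log; the helper name is assumed):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path has a NotAfter
// inside the next d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
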
	I1122 00:42:17.018402  236928 kubeadm.go:401] StartCluster: {Name:newest-cni-953404 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-953404 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1122 00:42:17.018555  236928 cri.go:54] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1122 00:42:17.018662  236928 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1122 00:42:17.079681  236928 cri.go:89] found id: "040561f5e693598ddda2a1207b1e7450e001274e60a6a6155b1bddfe4f764632"
	I1122 00:42:17.079757  236928 cri.go:89] found id: "c0576803f36ce2eb1cbdaaa03dc3304cfa7c0e14964ab51d3157094f62e7cef6"
	I1122 00:42:17.079776  236928 cri.go:89] found id: "9569b37164273b550b32e4a4842a6b3487c8dbbfe1bea214d492edde0ea68a04"
	I1122 00:42:17.079792  236928 cri.go:89] found id: "68964f2029378c8753880049e8a138f6d732ad285c7ae266ed075c1534a25aff"
	I1122 00:42:17.079823  236928 cri.go:89] found id: "fddb58d80a874884e3b278956f97291cc577695d419a243e23fb51e1f93cc7f1"
	I1122 00:42:17.079844  236928 cri.go:89] found id: "320d30b6e6043992eff65040cbb828d8621c9e556fe74b8deb765d4fcd67b371"
	I1122 00:42:17.079863  236928 cri.go:89] found id: ""
	I1122 00:42:17.079955  236928 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	I1122 00:42:17.116873  236928 cri.go:116] JSON = [{"ociVersion":"1.2.1","id":"312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f","pid":873,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f/rootfs","created":"2025-11-22T00:42:16.952414551Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_etcd-newest-cni-953404_118f80961ff0b26c227409b9cb092e20","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"etcd-newest-cni-953404","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"118f80961ff0b26c227409b9cb092e20"},"owner":"root"},{"ociVersion":"1.2.1","id":"af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968","pid":925,"status":"running","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968/rootfs","created":"2025-11-22T00:42:17.016293515Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"102","io.kubernetes.cri.sandbox-id":"af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-scheduler-newest-cni-953404_74bca79c7ffdc531f69f5a5a221a97bd","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-scheduler-newest-cni-953404","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"74bca79c7ffdc531f69f5a5a221a97bd"},"owner":"root"},{"ociVersion":"1.2.1","id":"b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473","pid":952,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473/rootfs","created":"0001-01-01T00:00:00Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"204","io.kubernetes.cri.sandbox-id":"b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-controller-manager-newest-cni-953404_200368f9345b84e4f3c70e4a4d3c9c77","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-controller-manager-newest-cni-953404","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"200368f9345b84e4f3c70e4a4d3c9c77"},"owner":"root"},{"ociVersion":"1.2.1","id":"bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6","pid":954,"status":"created","bundle":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6","rootfs":"/run/containerd/io.containerd.runtime.v2.task/k8s.io/bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6/rootfs","created":"2025-11-22T00:42:17.072127226Z","annotations":{"io.kubernetes.cri.container-type":"sandbox","io.kubernetes.cri.podsandbox.image-name":"registry.k8s.io/pause:3.10.1","io.kubernetes.cri.sandbox-cpu-period":"100000","io.kubernetes.cri.sandbox-cpu-quota":"0","io.kubernetes.cri.sandbox-cpu-shares":"256","io.kubernetes.cri.sandbox-id":"bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6","io.kubernetes.cri.sandbox-log-directory":"/var/log/pods/kube-system_kube-apiserver-newest-cni-953404_af7b377ee1ea510aa305430d7d26bd6c","io.kubernetes.cri.sandbox-memory":"0","io.kubernetes.cri.sandbox-name":"kube-apiserver-newest-cni-953404","io.kubernetes.cri.sandbox-namespace":"kube-system","io.kubernetes.cri.sandbox-uid":"af7b377ee1ea510aa305430d7d26bd6c"},"owner":"root"}]
	I1122 00:42:17.117089  236928 cri.go:126] list returned 4 containers
	I1122 00:42:17.117119  236928 cri.go:129] container: {ID:312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f Status:running}
	I1122 00:42:17.117151  236928 cri.go:131] skipping 312b85cd4750c0a79f087dde7e1f0335ea863ad7aba02c204b2bea3b6d4dfa4f - not in ps
	I1122 00:42:17.117172  236928 cri.go:129] container: {ID:af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968 Status:running}
	I1122 00:42:17.117206  236928 cri.go:131] skipping af9c7e373fb0bb21637702b3f576d554b5020ab987e93bc38184046f58c51968 - not in ps
	I1122 00:42:17.117232  236928 cri.go:129] container: {ID:b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473 Status:created}
	I1122 00:42:17.117254  236928 cri.go:131] skipping b61727ff1a6dfcdf3a9a91a530989efe301d3607d757f37dd923ad5d8827a473 - not in ps
	I1122 00:42:17.117272  236928 cri.go:129] container: {ID:bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6 Status:created}
	I1122 00:42:17.117291  236928 cri.go:131] skipping bdc1ea304b34bb13ef30cf37e48213fab89eed18fbff3b26a6f59801da0730f6 - not in ps
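
cri.go decodes the runc JSON above into id/status pairs and keeps only ids that the earlier crictl listing also returned; the four entries here are pause sandboxes, which crictl reports as pods rather than containers, so all four are skipped. A self-contained sketch of that decode-and-filter step (struct name, sample data, and the inPs set are illustrative):

package main

import (
	"encoding/json"
	"fmt"
)

// runcContainer mirrors the two fields of `runc list -f json` output
// that the filter actually uses.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

func main() {
	raw := []byte(`[{"id":"312b85cd4750","status":"running"},
	                {"id":"b61727ff1a6d","status":"created"}]`)
	inPs := map[string]bool{"b61727ff1a6d": true} // ids crictl reported

	var list []runcContainer
	if err := json.Unmarshal(raw, &list); err != nil {
		panic(err)
	}
	for _, c := range list {
		if !inPs[c.ID] {
			fmt.Printf("skipping %s - not in ps\n", c.ID)
			continue
		}
		fmt.Printf("container: {ID:%s Status:%s}\n", c.ID, c.Status)
	}
}
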
	I1122 00:42:17.117368  236928 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1122 00:42:17.131858  236928 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1122 00:42:17.131920  236928 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1122 00:42:17.132003  236928 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1122 00:42:17.148556  236928 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1122 00:42:17.149158  236928 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-953404" does not appear in /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:42:17.149432  236928 kubeconfig.go:62] /home/jenkins/minikube-integration/21934-2332/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-953404" cluster setting kubeconfig missing "newest-cni-953404" context setting]
	I1122 00:42:17.149859  236928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:42:17.152680  236928 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1122 00:42:17.168995  236928 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1122 00:42:17.169031  236928 kubeadm.go:602] duration metric: took 37.090512ms to restartPrimaryControlPlane
	I1122 00:42:17.169041  236928 kubeadm.go:403] duration metric: took 150.649637ms to StartCluster
	I1122 00:42:17.169056  236928 settings.go:142] acquiring lock: {Name:mk5b79634916fd13f05f4c848ff3e8b07cafa39b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:42:17.169123  236928 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:42:17.170062  236928 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/kubeconfig: {Name:mk4be876f293ebe51b23aabd893a8dda3d55dd2e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1122 00:42:17.170284  236928 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1122 00:42:17.170639  236928 config.go:182] Loaded profile config "newest-cni-953404": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:42:17.170687  236928 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1122 00:42:17.170753  236928 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-953404"
	I1122 00:42:17.170769  236928 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-953404"
	W1122 00:42:17.170780  236928 addons.go:248] addon storage-provisioner should already be in state true
	I1122 00:42:17.170800  236928 host.go:66] Checking if "newest-cni-953404" exists ...
	I1122 00:42:17.171268  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:17.171897  236928 addons.go:70] Setting dashboard=true in profile "newest-cni-953404"
	I1122 00:42:17.171915  236928 addons.go:239] Setting addon dashboard=true in "newest-cni-953404"
	W1122 00:42:17.171922  236928 addons.go:248] addon dashboard should already be in state true
	I1122 00:42:17.171945  236928 host.go:66] Checking if "newest-cni-953404" exists ...
	I1122 00:42:17.172378  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:17.172647  236928 addons.go:70] Setting metrics-server=true in profile "newest-cni-953404"
	I1122 00:42:17.172686  236928 addons.go:239] Setting addon metrics-server=true in "newest-cni-953404"
	W1122 00:42:17.172693  236928 addons.go:248] addon metrics-server should already be in state true
	I1122 00:42:17.172719  236928 host.go:66] Checking if "newest-cni-953404" exists ...
	I1122 00:42:17.173134  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:17.176149  236928 addons.go:70] Setting default-storageclass=true in profile "newest-cni-953404"
	I1122 00:42:17.176173  236928 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-953404"
	I1122 00:42:17.176478  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:17.177179  236928 out.go:179] * Verifying Kubernetes components...
	I1122 00:42:17.193351  236928 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1122 00:42:17.255358  236928 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1122 00:42:17.255443  236928 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1122 00:42:17.255460  236928 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1122 00:42:17.260887  236928 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1122 00:42:17.260953  236928 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1122 00:42:17.260964  236928 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1122 00:42:17.261033  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:17.261199  236928 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:42:17.261206  236928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1122 00:42:17.261240  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:17.263885  236928 addons.go:239] Setting addon default-storageclass=true in "newest-cni-953404"
	W1122 00:42:17.263907  236928 addons.go:248] addon default-storageclass should already be in state true
	I1122 00:42:17.263931  236928 host.go:66] Checking if "newest-cni-953404" exists ...
	I1122 00:42:17.264341  236928 cli_runner.go:164] Run: docker container inspect newest-cni-953404 --format={{.State.Status}}
	I1122 00:42:17.264654  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1122 00:42:17.264674  236928 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1122 00:42:17.264723  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:17.329394  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:17.336393  236928 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1122 00:42:17.336414  236928 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1122 00:42:17.336470  236928 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-953404
	I1122 00:42:17.339971  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:17.343841  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
	I1122 00:42:17.371991  236928 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33093 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa Username:docker}
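
Each sshutil.go line above opens an SSH client to 127.0.0.1:33093, the host port Docker forwards to the container's sshd, authenticating with the profile's id_rsa key. A minimal equivalent using golang.org/x/crypto/ssh (host-key verification is skipped here only because the target is a just-created local container; the command run is illustrative):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21934-2332/.minikube/machines/newest-cni-953404/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33093", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("sudo systemctl is-active kubelet")
	fmt.Printf("%s err=%v\n", out, err)
}
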
	I1122 00:42:17.507688  236928 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1122 00:42:17.659628  236928 api_server.go:52] waiting for apiserver process to appear ...
	I1122 00:42:17.659754  236928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:42:17.694149  236928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:42:17.743521  236928 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1122 00:42:17.743603  236928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1122 00:42:17.928202  236928 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1122 00:42:17.928284  236928 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1122 00:42:17.930343  236928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1122 00:42:17.939166  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1122 00:42:17.939240  236928 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1122 00:42:18.001646  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1122 00:42:18.001723  236928 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1122 00:42:18.034642  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1122 00:42:18.034716  236928 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	W1122 00:42:18.049175  236928 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1122 00:42:18.049265  236928 retry.go:31] will retry after 240.226221ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
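
The apply fails because nothing is listening on localhost:8443 yet (the restarted apiserver is still coming up), so retry.go waits ~240ms and tries again; the follow-up apply at 00:42:18.290 below adds --force. A generic sketch of that retry-with-backoff loop (the delay schedule and attempt budget are assumed, not minikube's actual policy):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs op with growing, jittered delays until it succeeds or the
// attempt budget is spent, returning the last error on failure.
func retry(attempts int, base time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// double the base each attempt and add jitter, e.g. ~240ms early on
		d := base<<i + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 120*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connect: connection refused")
		}
		return nil
	})
	fmt.Println("final:", err)
}
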
	I1122 00:42:18.075196  236928 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:42:18.075221  236928 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1122 00:42:18.128742  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1122 00:42:18.128761  236928 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1122 00:42:18.159854  236928 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:42:18.173773  236928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1122 00:42:18.236745  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1122 00:42:18.236818  236928 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1122 00:42:18.290481  236928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1122 00:42:18.330438  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1122 00:42:18.330510  236928 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1122 00:42:18.423392  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1122 00:42:18.423465  236928 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1122 00:42:18.531040  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1122 00:42:18.531115  236928 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1122 00:42:18.700673  236928 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1122 00:42:18.700747  236928 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1122 00:42:18.737381  236928 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	332b006afd8fc       1611cd07b61d5       11 seconds ago      Running             busybox                   0                   280ef6671af19       busybox                                     default
	03be686d9bf27       66749159455b3       17 seconds ago      Running             storage-provisioner       0                   588e35f6934bf       storage-provisioner                         kube-system
	a2377a9d9ca92       138784d87c9c5       17 seconds ago      Running             coredns                   0                   0f08c4ebd039b       coredns-66bc5c9577-7ddjv                    kube-system
	3311d8b509d77       b1a8c6f707935       29 seconds ago      Running             kindnet-cni               0                   a6265b3e05cd8       kindnet-72xnf                               kube-system
	bb89b752fb8c9       05baa95f5142d       32 seconds ago      Running             kube-proxy                0                   5ec85718e24b2       kube-proxy-m2v57                            kube-system
	2ab595e5f196d       43911e833d64d       51 seconds ago      Running             kube-apiserver            0                   153ce7a50b1b3       kube-apiserver-no-preload-734654            kube-system
	e638f47ef1c96       a1894772a478e       51 seconds ago      Running             etcd                      0                   681920238f189       etcd-no-preload-734654                      kube-system
	90ef3180c03a0       7eb2c6ff0c5a7       51 seconds ago      Running             kube-controller-manager   0                   6fff6b589eb64       kube-controller-manager-no-preload-734654   kube-system
	eaaf10be34890       b5f57ec6b9867       51 seconds ago      Running             kube-scheduler            0                   0cb5fc9e7bb0b       kube-scheduler-no-preload-734654            kube-system
	
	
	==> containerd <==
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.389432802Z" level=info msg="connecting to shim 588e35f6934bf424ea37b5aa19e1dd25628cee1f02396ad27d96a3437d9434b1" address="unix:///run/containerd/s/37ec126b77ad1a04f8a1db3ca8ed97bd0dd01f0a0f9bdc028fe532dab4768b85" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.515793994Z" level=info msg="StartContainer for \"a2377a9d9ca9276f76a49240cc0701616e2930409d19ee65fbd80752f8f71ffa\" returns successfully"
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.621205942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:storage-provisioner,Uid:62e9fefe-8213-4869-badc-8ff66248f8fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"588e35f6934bf424ea37b5aa19e1dd25628cee1f02396ad27d96a3437d9434b1\""
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.631967242Z" level=info msg="CreateContainer within sandbox \"588e35f6934bf424ea37b5aa19e1dd25628cee1f02396ad27d96a3437d9434b1\" for container &ContainerMetadata{Name:storage-provisioner,Attempt:0,}"
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.646107246Z" level=info msg="Container 03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.661615949Z" level=info msg="CreateContainer within sandbox \"588e35f6934bf424ea37b5aa19e1dd25628cee1f02396ad27d96a3437d9434b1\" for &ContainerMetadata{Name:storage-provisioner,Attempt:0,} returns container id \"03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b\""
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.664082033Z" level=info msg="StartContainer for \"03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b\""
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.665739360Z" level=info msg="connecting to shim 03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b" address="unix:///run/containerd/s/37ec126b77ad1a04f8a1db3ca8ed97bd0dd01f0a0f9bdc028fe532dab4768b85" protocol=ttrpc version=3
	Nov 22 00:42:10 no-preload-734654 containerd[758]: time="2025-11-22T00:42:10.757768295Z" level=info msg="StartContainer for \"03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b\" returns successfully"
	Nov 22 00:42:13 no-preload-734654 containerd[758]: time="2025-11-22T00:42:13.489007173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3e7f3b0f-2b43-4a0d-bf2a-130affdd4fe0,Namespace:default,Attempt:0,}"
	Nov 22 00:42:13 no-preload-734654 containerd[758]: time="2025-11-22T00:42:13.596202037Z" level=info msg="connecting to shim 280ef6671af19befe08c3abbc381d1a0117ffcaec8d092cbad21720ae554d4ea" address="unix:///run/containerd/s/8a6c35978fbe912d4c674dfc7fcc2ecf2221e1fa92a4e5102640e1590b092ed9" namespace=k8s.io protocol=ttrpc version=3
	Nov 22 00:42:13 no-preload-734654 containerd[758]: time="2025-11-22T00:42:13.749477326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:busybox,Uid:3e7f3b0f-2b43-4a0d-bf2a-130affdd4fe0,Namespace:default,Attempt:0,} returns sandbox id \"280ef6671af19befe08c3abbc381d1a0117ffcaec8d092cbad21720ae554d4ea\""
	Nov 22 00:42:13 no-preload-734654 containerd[758]: time="2025-11-22T00:42:13.755282843Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.076956708Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.080401028Z" level=info msg="stop pulling image gcr.io/k8s-minikube/busybox:1.28.4-glibc: active requests=0, bytes read=1937189"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.082512630Z" level=info msg="ImageCreate event name:\"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.086607954Z" level=info msg="ImageCreate event name:\"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.088106124Z" level=info msg="Pulled image \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" with image id \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\", repo tag \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\", repo digest \"gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e\", size \"1935750\" in 2.332615188s"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.088258167Z" level=info msg="PullImage \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\" returns image reference \"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c\""
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.094739320Z" level=info msg="CreateContainer within sandbox \"280ef6671af19befe08c3abbc381d1a0117ffcaec8d092cbad21720ae554d4ea\" for container &ContainerMetadata{Name:busybox,Attempt:0,}"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.110971725Z" level=info msg="Container 332b006afd8fcd765c000454a646b2d4213dd34f4d7b1dd0b69c32c11b9f535b: CDI devices from CRI Config.CDIDevices: []"
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.121577765Z" level=info msg="CreateContainer within sandbox \"280ef6671af19befe08c3abbc381d1a0117ffcaec8d092cbad21720ae554d4ea\" for &ContainerMetadata{Name:busybox,Attempt:0,} returns container id \"332b006afd8fcd765c000454a646b2d4213dd34f4d7b1dd0b69c32c11b9f535b\""
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.125222565Z" level=info msg="StartContainer for \"332b006afd8fcd765c000454a646b2d4213dd34f4d7b1dd0b69c32c11b9f535b\""
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.128383173Z" level=info msg="connecting to shim 332b006afd8fcd765c000454a646b2d4213dd34f4d7b1dd0b69c32c11b9f535b" address="unix:///run/containerd/s/8a6c35978fbe912d4c674dfc7fcc2ecf2221e1fa92a4e5102640e1590b092ed9" protocol=ttrpc version=3
	Nov 22 00:42:16 no-preload-734654 containerd[758]: time="2025-11-22T00:42:16.284701250Z" level=info msg="StartContainer for \"332b006afd8fcd765c000454a646b2d4213dd34f4d7b1dd0b69c32c11b9f535b\" returns successfully"
	
	
	==> coredns [a2377a9d9ca9276f76a49240cc0701616e2930409d19ee65fbd80752f8f71ffa] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38189 - 9040 "HINFO IN 8651196616265383532.1837598247671117162. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.021609404s
	
	
	==> describe nodes <==
	Name:               no-preload-734654
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-734654
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=299bbe887a12c40541707cc636234f35f4ff1785
	                    minikube.k8s.io/name=no-preload-734654
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_22T00_41_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 22 Nov 2025 00:41:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-734654
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 22 Nov 2025 00:42:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 22 Nov 2025 00:42:21 +0000   Sat, 22 Nov 2025 00:41:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 22 Nov 2025 00:42:21 +0000   Sat, 22 Nov 2025 00:41:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 22 Nov 2025 00:42:21 +0000   Sat, 22 Nov 2025 00:41:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 22 Nov 2025 00:42:21 +0000   Sat, 22 Nov 2025 00:42:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-734654
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c92eea4d3c03c0156e01fcf8691e3907
	  System UUID:                2d98f259-ac3c-4a59-b21a-68b0575348bc
	  Boot ID:                    4e86741a-5896-4eb6-97ce-70ea8beedc67
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  containerd://2.1.5
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kube-system                 coredns-66bc5c9577-7ddjv                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     33s
	  kube-system                 etcd-no-preload-734654                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         38s
	  kube-system                 kindnet-72xnf                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      34s
	  kube-system                 kube-apiserver-no-preload-734654             250m (12%)    0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-no-preload-734654    200m (10%)    0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-m2v57                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-no-preload-734654             100m (5%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 31s                kube-proxy       
	  Normal   NodeAllocatableEnforced  53s                kubelet          Updated Node Allocatable limit across pods
	  Warning  CgroupV1                 53s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  53s (x8 over 53s)  kubelet          Node no-preload-734654 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    53s (x8 over 53s)  kubelet          Node no-preload-734654 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     53s (x7 over 53s)  kubelet          Node no-preload-734654 status is now: NodeHasSufficientPID
	  Normal   Starting                 53s                kubelet          Starting kubelet.
	  Warning  CgroupV1                 38s                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   Starting                 38s                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  38s                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  38s                kubelet          Node no-preload-734654 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    38s                kubelet          Node no-preload-734654 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     38s                kubelet          Node no-preload-734654 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           35s                node-controller  Node no-preload-734654 event: Registered Node no-preload-734654 in Controller
	  Normal   NodeReady                19s                kubelet          Node no-preload-734654 status is now: NodeReady
	
	
	==> dmesg <==
	[Nov21 23:17] ACPI: SRAT not present
	[  +0.000000] ACPI: SRAT not present
	[  +0.000000] SPI driver altr_a10sr has no spi_device_id for altr,a10sr
	[  +0.017121] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.498034] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.037542] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +0.808656] ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
	[  +6.648915] kauditd_printk_skb: 36 callbacks suppressed
	[Nov21 23:58] FS-Cache: Duplicate cookie detected
	[  +0.000753] FS-Cache: O-cookie c=00000013 [p=00000002 fl=222 nc=0 na=1]
	[  +0.001054] FS-Cache: O-cookie d=00000000f9ea0775{9P.session} n=0000000035823f74
	[  +0.001177] FS-Cache: O-key=[10] '34323935353131333738'
	[  +0.000819] FS-Cache: N-cookie c=00000014 [p=00000002 fl=2 nc=0 na=1]
	[  +0.000982] FS-Cache: N-cookie d=00000000f9ea0775{9P.session} n=00000000dbfd8515
	[  +0.001154] FS-Cache: N-key=[10] '34323935353131333738'
	[Nov22 00:00] hrtimer: interrupt took 9958927 ns
	
	
	==> etcd [e638f47ef1c96a374c96f40ec7088eb9c066047ef70a224e9f97e7ff919069b4] <==
	{"level":"warn","ts":"2025-11-22T00:41:41.964388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35046","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.023284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.076292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.112204Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.148741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.207988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.239938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.280106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.332107Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.361124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35210","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.427411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.447386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.501262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.538899Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.587654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.650386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.685563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.740413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.768685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.819758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.860668Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.907995Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.952940Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:42.994938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-22T00:41:43.275841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35480","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 00:42:28 up  1:24,  0 user,  load average: 5.32, 3.89, 3.13
	Linux no-preload-734654 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3311d8b509d77a934414dc4e7df24b222e0d39fbc0b723442a8e5e7c930a8ef4] <==
	I1122 00:41:59.313687       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1122 00:41:59.313989       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1122 00:41:59.314179       1 main.go:148] setting mtu 1500 for CNI 
	I1122 00:41:59.314200       1 main.go:178] kindnetd IP family: "ipv4"
	I1122 00:41:59.314210       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-22T00:41:59Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1122 00:41:59.519946       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1122 00:41:59.520202       1 controller.go:381] "Waiting for informer caches to sync"
	I1122 00:41:59.520286       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1122 00:41:59.616699       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1122 00:41:59.720477       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1122 00:41:59.720566       1 metrics.go:72] Registering metrics
	I1122 00:41:59.720672       1 controller.go:711] "Syncing nftables rules"
	I1122 00:42:09.522686       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:42:09.522725       1 main.go:301] handling current node
	I1122 00:42:19.515805       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1122 00:42:19.516051       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2ab595e5f196dc17ecf5b0fc15d3903663047a05453ec4e7edbc28055be790fa] <==
	I1122 00:41:45.869298       1 autoregister_controller.go:144] Starting autoregister controller
	I1122 00:41:45.869304       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1122 00:41:45.869309       1 cache.go:39] Caches are synced for autoregister controller
	I1122 00:41:45.871607       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1122 00:41:45.871664       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1122 00:41:45.889701       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1122 00:41:45.906795       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:41:46.086892       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1122 00:41:46.113967       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1122 00:41:46.120563       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1122 00:41:47.989605       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1122 00:41:48.121810       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1122 00:41:48.244361       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1122 00:41:48.257309       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1122 00:41:48.258772       1 controller.go:667] quota admission added evaluator for: endpoints
	I1122 00:41:48.268313       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1122 00:41:48.781176       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1122 00:41:50.250118       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1122 00:41:50.279528       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1122 00:41:50.300123       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1122 00:41:54.531903       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:41:54.559290       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1122 00:41:54.573178       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1122 00:41:54.981847       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1122 00:42:23.380743       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:52956: use of closed network connection
	
	
	==> kube-controller-manager [90ef3180c03a0166582d94e9b36cee4461ebe824827b4d18bae507872b9e520f] <==
	I1122 00:41:53.852723       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1122 00:41:53.857947       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1122 00:41:53.858291       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:41:53.861513       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1122 00:41:53.861678       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1122 00:41:53.861877       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1122 00:41:53.862036       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1122 00:41:53.862141       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1122 00:41:53.862155       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1122 00:41:53.862561       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1122 00:41:53.866885       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1122 00:41:53.867478       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1122 00:41:53.867505       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1122 00:41:53.870299       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1122 00:41:53.870839       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1122 00:41:53.871020       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:41:53.871193       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1122 00:41:53.873243       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1122 00:41:53.874499       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1122 00:41:53.879662       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1122 00:41:53.890304       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1122 00:41:53.914811       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1122 00:41:53.914993       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1122 00:41:53.915068       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1122 00:42:13.861991       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [bb89b752fb8c9bd33331d89cd18385d39b6b8c698e4464bf4aa55cfad64548e7] <==
	I1122 00:41:56.134602       1 server_linux.go:53] "Using iptables proxy"
	I1122 00:41:56.229724       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1122 00:41:56.333363       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1122 00:41:56.333401       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1122 00:41:56.333485       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1122 00:41:56.462677       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1122 00:41:56.462762       1 server_linux.go:132] "Using iptables Proxier"
	I1122 00:41:56.468347       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1122 00:41:56.468789       1 server.go:527] "Version info" version="v1.34.1"
	I1122 00:41:56.468805       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:41:56.473912       1 config.go:200] "Starting service config controller"
	I1122 00:41:56.473930       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1122 00:41:56.473951       1 config.go:106] "Starting endpoint slice config controller"
	I1122 00:41:56.473955       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1122 00:41:56.473967       1 config.go:403] "Starting serviceCIDR config controller"
	I1122 00:41:56.473970       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1122 00:41:56.481426       1 config.go:309] "Starting node config controller"
	I1122 00:41:56.481462       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1122 00:41:56.481472       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1122 00:41:56.574757       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1122 00:41:56.574789       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1122 00:41:56.574830       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [eaaf10be348904a85c908c1eafcae30b2dfe6ee0280a7611486f32fecb5800a3] <==
	I1122 00:41:46.030965       1 serving.go:386] Generated self-signed cert in-memory
	I1122 00:41:49.366453       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1122 00:41:49.366548       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1122 00:41:49.371489       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1122 00:41:49.371910       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1122 00:41:49.371986       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1122 00:41:49.372058       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1122 00:41:49.382672       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:41:49.382821       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:41:49.383006       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:41:49.383047       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1122 00:41:49.472811       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1122 00:41:49.483903       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1122 00:41:49.484729       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 22 00:41:51 no-preload-734654 kubelet[2106]: I1122 00:41:51.579132    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-734654" podStartSLOduration=5.579111109 podStartE2EDuration="5.579111109s" podCreationTimestamp="2025-11-22 00:41:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:41:51.555043317 +0000 UTC m=+1.369998709" watchObservedRunningTime="2025-11-22 00:41:51.579111109 +0000 UTC m=+1.394066493"
	Nov 22 00:41:51 no-preload-734654 kubelet[2106]: I1122 00:41:51.600905    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-734654" podStartSLOduration=1.6008858 podStartE2EDuration="1.6008858s" podCreationTimestamp="2025-11-22 00:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:41:51.579513395 +0000 UTC m=+1.394468779" watchObservedRunningTime="2025-11-22 00:41:51.6008858 +0000 UTC m=+1.415841184"
	Nov 22 00:41:51 no-preload-734654 kubelet[2106]: I1122 00:41:51.628034    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-734654" podStartSLOduration=1.628014736 podStartE2EDuration="1.628014736s" podCreationTimestamp="2025-11-22 00:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:41:51.601183338 +0000 UTC m=+1.416138730" watchObservedRunningTime="2025-11-22 00:41:51.628014736 +0000 UTC m=+1.442970128"
	Nov 22 00:41:51 no-preload-734654 kubelet[2106]: I1122 00:41:51.649718    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-734654" podStartSLOduration=1.649698275 podStartE2EDuration="1.649698275s" podCreationTimestamp="2025-11-22 00:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:41:51.628392112 +0000 UTC m=+1.443347496" watchObservedRunningTime="2025-11-22 00:41:51.649698275 +0000 UTC m=+1.464653659"
	Nov 22 00:41:53 no-preload-734654 kubelet[2106]: I1122 00:41:53.911994    2106 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 22 00:41:53 no-preload-734654 kubelet[2106]: I1122 00:41:53.913751    2106 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919151    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eed2027d-917a-415e-ad0e-2c5496b01040-lib-modules\") pod \"kindnet-72xnf\" (UID: \"eed2027d-917a-415e-ad0e-2c5496b01040\") " pod="kube-system/kindnet-72xnf"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919204    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e6d1b10-56de-4646-b8d8-9ca98489dca8-lib-modules\") pod \"kube-proxy-m2v57\" (UID: \"7e6d1b10-56de-4646-b8d8-9ca98489dca8\") " pod="kube-system/kube-proxy-m2v57"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919225    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s97q9\" (UniqueName: \"kubernetes.io/projected/7e6d1b10-56de-4646-b8d8-9ca98489dca8-kube-api-access-s97q9\") pod \"kube-proxy-m2v57\" (UID: \"7e6d1b10-56de-4646-b8d8-9ca98489dca8\") " pod="kube-system/kube-proxy-m2v57"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919255    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e6d1b10-56de-4646-b8d8-9ca98489dca8-xtables-lock\") pod \"kube-proxy-m2v57\" (UID: \"7e6d1b10-56de-4646-b8d8-9ca98489dca8\") " pod="kube-system/kube-proxy-m2v57"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919279    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/eed2027d-917a-415e-ad0e-2c5496b01040-cni-cfg\") pod \"kindnet-72xnf\" (UID: \"eed2027d-917a-415e-ad0e-2c5496b01040\") " pod="kube-system/kindnet-72xnf"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919297    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eed2027d-917a-415e-ad0e-2c5496b01040-xtables-lock\") pod \"kindnet-72xnf\" (UID: \"eed2027d-917a-415e-ad0e-2c5496b01040\") " pod="kube-system/kindnet-72xnf"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919313    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dj52\" (UniqueName: \"kubernetes.io/projected/eed2027d-917a-415e-ad0e-2c5496b01040-kube-api-access-6dj52\") pod \"kindnet-72xnf\" (UID: \"eed2027d-917a-415e-ad0e-2c5496b01040\") " pod="kube-system/kindnet-72xnf"
	Nov 22 00:41:54 no-preload-734654 kubelet[2106]: I1122 00:41:54.919342    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e6d1b10-56de-4646-b8d8-9ca98489dca8-kube-proxy\") pod \"kube-proxy-m2v57\" (UID: \"7e6d1b10-56de-4646-b8d8-9ca98489dca8\") " pod="kube-system/kube-proxy-m2v57"
	Nov 22 00:41:55 no-preload-734654 kubelet[2106]: I1122 00:41:55.111751    2106 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Nov 22 00:41:56 no-preload-734654 kubelet[2106]: I1122 00:41:56.872722    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m2v57" podStartSLOduration=2.8727033349999997 podStartE2EDuration="2.872703335s" podCreationTimestamp="2025-11-22 00:41:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:41:56.872438601 +0000 UTC m=+6.687393993" watchObservedRunningTime="2025-11-22 00:41:56.872703335 +0000 UTC m=+6.687658760"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.616690    2106 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.660771    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-72xnf" podStartSLOduration=12.567362305 podStartE2EDuration="15.660728742s" podCreationTimestamp="2025-11-22 00:41:54 +0000 UTC" firstStartedPulling="2025-11-22 00:41:55.797081334 +0000 UTC m=+5.612036718" lastFinishedPulling="2025-11-22 00:41:58.890447771 +0000 UTC m=+8.705403155" observedRunningTime="2025-11-22 00:41:59.863660294 +0000 UTC m=+9.678615686" watchObservedRunningTime="2025-11-22 00:42:09.660728742 +0000 UTC m=+19.475684142"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.777110    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/03d0ec5c-3721-4533-9cb8-f5210335d7a6-config-volume\") pod \"coredns-66bc5c9577-7ddjv\" (UID: \"03d0ec5c-3721-4533-9cb8-f5210335d7a6\") " pod="kube-system/coredns-66bc5c9577-7ddjv"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.777319    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67lxt\" (UniqueName: \"kubernetes.io/projected/03d0ec5c-3721-4533-9cb8-f5210335d7a6-kube-api-access-67lxt\") pod \"coredns-66bc5c9577-7ddjv\" (UID: \"03d0ec5c-3721-4533-9cb8-f5210335d7a6\") " pod="kube-system/coredns-66bc5c9577-7ddjv"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.878152    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/62e9fefe-8213-4869-badc-8ff66248f8fa-tmp\") pod \"storage-provisioner\" (UID: \"62e9fefe-8213-4869-badc-8ff66248f8fa\") " pod="kube-system/storage-provisioner"
	Nov 22 00:42:09 no-preload-734654 kubelet[2106]: I1122 00:42:09.878346    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nq5b\" (UniqueName: \"kubernetes.io/projected/62e9fefe-8213-4869-badc-8ff66248f8fa-kube-api-access-5nq5b\") pod \"storage-provisioner\" (UID: \"62e9fefe-8213-4869-badc-8ff66248f8fa\") " pod="kube-system/storage-provisioner"
	Nov 22 00:42:10 no-preload-734654 kubelet[2106]: I1122 00:42:10.867482    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.867451052 podStartE2EDuration="14.867451052s" podCreationTimestamp="2025-11-22 00:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:42:10.867274869 +0000 UTC m=+20.682230253" watchObservedRunningTime="2025-11-22 00:42:10.867451052 +0000 UTC m=+20.682406436"
	Nov 22 00:42:13 no-preload-734654 kubelet[2106]: I1122 00:42:13.174967    2106 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-7ddjv" podStartSLOduration=18.174946892 podStartE2EDuration="18.174946892s" podCreationTimestamp="2025-11-22 00:41:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-22 00:42:10.900674691 +0000 UTC m=+20.715630091" watchObservedRunningTime="2025-11-22 00:42:13.174946892 +0000 UTC m=+22.989902292"
	Nov 22 00:42:13 no-preload-734654 kubelet[2106]: I1122 00:42:13.215162    2106 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lc2s\" (UniqueName: \"kubernetes.io/projected/3e7f3b0f-2b43-4a0d-bf2a-130affdd4fe0-kube-api-access-8lc2s\") pod \"busybox\" (UID: \"3e7f3b0f-2b43-4a0d-bf2a-130affdd4fe0\") " pod="default/busybox"
	
	
	==> storage-provisioner [03be686d9bf27577a5f972e5ca62f8eae31da9b4c02de60460a09417981e371b] <==
	I1122 00:42:10.773819       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1122 00:42:10.776330       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:10.785727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:42:10.786043       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1122 00:42:10.786341       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-734654_8075120c-2162-4a78-a265-bb3566c525f1!
	I1122 00:42:10.786459       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fd06f4be-ec1a-459d-89a0-7f78a8051ba5", APIVersion:"v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-734654_8075120c-2162-4a78-a265-bb3566c525f1 became leader
	W1122 00:42:10.796523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:10.804397       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1122 00:42:10.886776       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-734654_8075120c-2162-4a78-a265-bb3566c525f1!
	W1122 00:42:12.808289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:12.813654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:14.817698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:14.826142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:16.829176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:16.836895       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:18.840379       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:18.845234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:20.848418       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:20.856115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:22.860563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:22.867777       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:24.875698       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:24.889139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:26.893272       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1122 00:42:26.903446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-734654 -n no-preload-734654
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-734654 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (16.54s)
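
Triage note: the post-mortem collection shown above can be reproduced by hand against a live profile. A minimal sketch, assuming the profile name no-preload-734654 from this run, the locally built out/minikube-linux-arm64 binary, and a kubectl context of the same name (all taken from the harness commands above; names and paths are specific to this CI run):

	# Node and component logs, as dumped in the post-mortem above:
	out/minikube-linux-arm64 logs -p no-preload-734654
	# API-server health, as checked by helpers_test.go:262:
	out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-734654 -n no-preload-734654
	# Pods not in the Running phase, as listed by helpers_test.go:269:
	kubectl --context no-preload-734654 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running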

                                                
                                    

Test pass (299/333)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 35.65
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 35.81
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.09
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
21 TestBinaryMirror 0.58
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 160.97
29 TestAddons/serial/Volcano 42.66
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.06
35 TestAddons/parallel/Registry 17.08
36 TestAddons/parallel/RegistryCreds 0.73
37 TestAddons/parallel/Ingress 20
38 TestAddons/parallel/InspektorGadget 11.83
39 TestAddons/parallel/MetricsServer 6
41 TestAddons/parallel/CSI 52.01
42 TestAddons/parallel/Headlamp 17.94
43 TestAddons/parallel/CloudSpanner 5.68
44 TestAddons/parallel/LocalPath 53.58
45 TestAddons/parallel/NvidiaDevicePlugin 6.04
46 TestAddons/parallel/Yakd 11.81
48 TestAddons/StoppedEnableDisable 12.39
49 TestCertOptions 36.98
50 TestCertExpiration 232.8
52 TestForceSystemdFlag 37.33
53 TestForceSystemdEnv 44.78
54 TestDockerEnvContainerd 49.58
58 TestErrorSpam/setup 31.23
59 TestErrorSpam/start 0.93
60 TestErrorSpam/status 1.15
61 TestErrorSpam/pause 1.72
62 TestErrorSpam/unpause 1.86
63 TestErrorSpam/stop 2.29
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 81.25
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.92
70 TestFunctional/serial/KubeContext 0.07
71 TestFunctional/serial/KubectlGetPods 0.11
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.38
75 TestFunctional/serial/CacheCmd/cache/add_local 1.2
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
77 TestFunctional/serial/CacheCmd/cache/list 0.05
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.83
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 51.41
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.48
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 5.23
89 TestFunctional/parallel/ConfigCmd 0.42
90 TestFunctional/parallel/DashboardCmd 10.46
91 TestFunctional/parallel/DryRun 0.63
92 TestFunctional/parallel/InternationalLanguage 0.29
93 TestFunctional/parallel/StatusCmd 1.28
97 TestFunctional/parallel/ServiceCmdConnect 10.74
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 27.3
101 TestFunctional/parallel/SSHCmd 0.75
102 TestFunctional/parallel/CpCmd 2.25
104 TestFunctional/parallel/FileSync 0.35
105 TestFunctional/parallel/CertSync 2.1
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
113 TestFunctional/parallel/License 0.3
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.75
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.45
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 7.23
126 TestFunctional/parallel/ServiceCmd/List 0.52
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.58
128 TestFunctional/parallel/ProfileCmd/profile_not_create 0.57
129 TestFunctional/parallel/ServiceCmd/HTTPS 0.49
130 TestFunctional/parallel/ProfileCmd/profile_list 0.55
131 TestFunctional/parallel/ServiceCmd/Format 0.5
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.59
133 TestFunctional/parallel/ServiceCmd/URL 0.53
134 TestFunctional/parallel/MountCmd/any-port 8.85
135 TestFunctional/parallel/MountCmd/specific-port 1.49
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.32
137 TestFunctional/parallel/Version/short 0.08
138 TestFunctional/parallel/Version/components 1.31
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.28
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.27
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.32
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.93
144 TestFunctional/parallel/ImageCommands/Setup 0.67
145 TestFunctional/parallel/UpdateContextCmd/no_changes 0.19
146 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
147 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.23
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.3
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.27
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.38
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.39
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.63
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 192.84
163 TestMultiControlPlane/serial/DeployApp 7.41
164 TestMultiControlPlane/serial/PingHostFromPods 1.61
165 TestMultiControlPlane/serial/AddWorkerNode 61.39
166 TestMultiControlPlane/serial/NodeLabels 0.11
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.11
168 TestMultiControlPlane/serial/CopyFile 20.26
169 TestMultiControlPlane/serial/StopSecondaryNode 12.96
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.82
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.83
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.5
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 98.62
174 TestMultiControlPlane/serial/DeleteSecondaryNode 11.24
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.81
176 TestMultiControlPlane/serial/StopCluster 36.46
177 TestMultiControlPlane/serial/RestartCluster 60.45
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.8
179 TestMultiControlPlane/serial/AddSecondaryNode 92.77
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.18
185 TestJSONOutput/start/Command 82.41
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Command 0.74
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Command 0.63
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 1.45
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 46.99
211 TestKicCustomNetwork/use_default_bridge_network 33.81
212 TestKicExistingNetwork 32.91
213 TestKicCustomSubnet 34.27
214 TestKicStaticIP 39.21
215 TestMainNoArgs 0.05
216 TestMinikubeProfile 70.44
219 TestMountStart/serial/StartWithMountFirst 8.16
220 TestMountStart/serial/VerifyMountFirst 0.26
221 TestMountStart/serial/StartWithMountSecond 8.63
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.7
224 TestMountStart/serial/VerifyMountPostDelete 0.29
225 TestMountStart/serial/Stop 1.31
226 TestMountStart/serial/RestartStopped 7.39
227 TestMountStart/serial/VerifyMountPostStop 0.28
230 TestMultiNode/serial/FreshStart2Nodes 135.52
231 TestMultiNode/serial/DeployApp2Nodes 5.37
232 TestMultiNode/serial/PingHostFrom2Pods 0.98
233 TestMultiNode/serial/AddNode 58.26
234 TestMultiNode/serial/MultiNodeLabels 0.09
235 TestMultiNode/serial/ProfileList 0.7
236 TestMultiNode/serial/CopyFile 10.47
237 TestMultiNode/serial/StopNode 2.47
238 TestMultiNode/serial/StartAfterStop 7.81
239 TestMultiNode/serial/RestartKeepsNodes 74.54
240 TestMultiNode/serial/DeleteNode 5.92
241 TestMultiNode/serial/StopMultiNode 24.12
242 TestMultiNode/serial/RestartMultiNode 57.9
243 TestMultiNode/serial/ValidateNameConflict 36.02
248 TestPreload 118.61
250 TestScheduledStopUnix 107.65
253 TestInsufficientStorage 13.32
254 TestRunningBinaryUpgrade 69.31
256 TestKubernetesUpgrade 352.33
257 TestMissingContainerUpgrade 168.21
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 40.74
261 TestNoKubernetes/serial/StartWithStopK8s 24.76
262 TestNoKubernetes/serial/Start 9.29
263 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
264 TestNoKubernetes/serial/VerifyK8sNotRunning 0.44
265 TestNoKubernetes/serial/ProfileList 3.51
266 TestNoKubernetes/serial/Stop 1.4
267 TestNoKubernetes/serial/StartNoArgs 6.6
268 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.27
269 TestStoppedBinaryUpgrade/Setup 8
270 TestStoppedBinaryUpgrade/Upgrade 52.85
271 TestStoppedBinaryUpgrade/MinikubeLogs 1.47
280 TestPause/serial/Start 52.93
281 TestPause/serial/SecondStartNoReconfiguration 6.5
282 TestPause/serial/Pause 0.71
283 TestPause/serial/VerifyStatus 0.34
284 TestPause/serial/Unpause 0.61
285 TestPause/serial/PauseAgain 0.91
286 TestPause/serial/DeletePaused 2.97
287 TestPause/serial/VerifyDeletedResources 0.45
295 TestNetworkPlugins/group/false 5.66
300 TestStartStop/group/old-k8s-version/serial/FirstStart 59.58
302 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.22
303 TestStartStop/group/old-k8s-version/serial/Stop 12.15
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
305 TestStartStop/group/old-k8s-version/serial/SecondStart 28.69
306 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8.01
307 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
308 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
309 TestStartStop/group/old-k8s-version/serial/Pause 3.15
311 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 92.14
313 TestStartStop/group/embed-certs/serial/FirstStart 82.41
315 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.07
316 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.36
318 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
319 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 58.89
320 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.57
321 TestStartStop/group/embed-certs/serial/Stop 12.95
322 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.2
323 TestStartStop/group/embed-certs/serial/SecondStart 54.4
324 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
325 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.09
326 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
327 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.18
329 TestStartStop/group/no-preload/serial/FirstStart 73.91
330 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
331 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.12
332 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.35
333 TestStartStop/group/embed-certs/serial/Pause 4.05
335 TestStartStop/group/newest-cni/serial/FirstStart 43.06
336 TestStartStop/group/newest-cni/serial/DeployApp 0
337 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1
338 TestStartStop/group/newest-cni/serial/Stop 1.37
339 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
340 TestStartStop/group/newest-cni/serial/SecondStart 20.27
342 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.28
345 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.46
346 TestStartStop/group/newest-cni/serial/Pause 3.88
347 TestStartStop/group/no-preload/serial/Stop 12.65
348 TestNetworkPlugins/group/auto/Start 84.59
349 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
350 TestStartStop/group/no-preload/serial/SecondStart 58.02
351 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
352 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.11
353 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
354 TestStartStop/group/no-preload/serial/Pause 3.1
355 TestNetworkPlugins/group/kindnet/Start 83.67
356 TestNetworkPlugins/group/auto/KubeletFlags 0.39
357 TestNetworkPlugins/group/auto/NetCatPod 11.42
358 TestNetworkPlugins/group/auto/DNS 0.24
359 TestNetworkPlugins/group/auto/Localhost 0.2
360 TestNetworkPlugins/group/auto/HairPin 0.2
361 TestNetworkPlugins/group/calico/Start 71.32
362 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
363 TestNetworkPlugins/group/kindnet/KubeletFlags 0.4
364 TestNetworkPlugins/group/kindnet/NetCatPod 10.42
365 TestNetworkPlugins/group/kindnet/DNS 0.24
366 TestNetworkPlugins/group/kindnet/Localhost 0.17
367 TestNetworkPlugins/group/kindnet/HairPin 0.16
368 TestNetworkPlugins/group/calico/ControllerPod 6.01
369 TestNetworkPlugins/group/calico/KubeletFlags 0.46
370 TestNetworkPlugins/group/calico/NetCatPod 11.39
371 TestNetworkPlugins/group/custom-flannel/Start 68.7
372 TestNetworkPlugins/group/calico/DNS 0.28
373 TestNetworkPlugins/group/calico/Localhost 0.19
374 TestNetworkPlugins/group/calico/HairPin 0.15
375 TestNetworkPlugins/group/enable-default-cni/Start 51.28
376 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.35
377 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.32
378 TestNetworkPlugins/group/custom-flannel/DNS 0.19
379 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
380 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
381 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
382 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.28
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.25
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.24
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
386 TestNetworkPlugins/group/flannel/Start 67.62
387 TestNetworkPlugins/group/bridge/Start 87.03
388 TestNetworkPlugins/group/flannel/ControllerPod 6.01
389 TestNetworkPlugins/group/flannel/KubeletFlags 0.3
390 TestNetworkPlugins/group/flannel/NetCatPod 10.32
391 TestNetworkPlugins/group/flannel/DNS 0.18
392 TestNetworkPlugins/group/flannel/Localhost 0.16
393 TestNetworkPlugins/group/flannel/HairPin 0.14
394 TestNetworkPlugins/group/bridge/KubeletFlags 0.41
395 TestNetworkPlugins/group/bridge/NetCatPod 11.37
396 TestNetworkPlugins/group/bridge/DNS 0.2
397 TestNetworkPlugins/group/bridge/Localhost 0.15
398 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.28.0/json-events (35.65s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-431646 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-431646 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (35.654648528s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (35.65s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1121 23:47:39.819534    5623 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1121 23:47:39.819632    5623 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
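
Note: this check passes purely on the presence of the cached preload tarball logged above. A minimal manual equivalent, assuming the default MINIKUBE_HOME of $HOME/.minikube rather than the Jenkins workspace path used in this run (the path below is illustrative; adjust it to your cache layout):

	# Confirm the v1.28.0 containerd preload tarball exists in the local cache:
	ls -lh "$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"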

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-431646
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-431646: exit status 85 (86.541145ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-431646 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-431646 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:47:04
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:47:04.210345    5628 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:47:04.210459    5628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:04.210468    5628 out.go:374] Setting ErrFile to fd 2...
	I1121 23:47:04.210473    5628 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:04.210729    5628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	W1121 23:47:04.210856    5628 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21934-2332/.minikube/config/config.json: open /home/jenkins/minikube-integration/21934-2332/.minikube/config/config.json: no such file or directory
	I1121 23:47:04.211272    5628 out.go:368] Setting JSON to true
	I1121 23:47:04.212055    5628 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1762,"bootTime":1763767063,"procs":151,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1121 23:47:04.212127    5628 start.go:143] virtualization:  
	I1121 23:47:04.217484    5628 out.go:99] [download-only-431646] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W1121 23:47:04.217662    5628 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball: no such file or directory
	I1121 23:47:04.217767    5628 notify.go:221] Checking for updates...
	I1121 23:47:04.221405    5628 out.go:171] MINIKUBE_LOCATION=21934
	I1121 23:47:04.224413    5628 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:47:04.227361    5628 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1121 23:47:04.230221    5628 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1121 23:47:04.233259    5628 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1121 23:47:04.239096    5628 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 23:47:04.239364    5628 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:47:04.270220    5628 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 23:47:04.270331    5628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:04.677718    5628 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-21 23:47:04.668168569 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:04.677826    5628 docker.go:319] overlay module found
	I1121 23:47:04.681011    5628 out.go:99] Using the docker driver based on user configuration
	I1121 23:47:04.681052    5628 start.go:309] selected driver: docker
	I1121 23:47:04.681059    5628 start.go:930] validating driver "docker" against <nil>
	I1121 23:47:04.681157    5628 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:04.734967    5628 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-11-21 23:47:04.726264188 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:04.735122    5628 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:47:04.735429    5628 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1121 23:47:04.735627    5628 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 23:47:04.738630    5628 out.go:171] Using Docker driver with root privileges
	I1121 23:47:04.741672    5628 cni.go:84] Creating CNI manager for ""
	I1121 23:47:04.741742    5628 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 23:47:04.741757    5628 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 23:47:04.741847    5628 start.go:353] cluster config:
	{Name:download-only-431646 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-431646 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:47:04.744808    5628 out.go:99] Starting "download-only-431646" primary control-plane node in "download-only-431646" cluster
	I1121 23:47:04.744837    5628 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 23:47:04.747805    5628 out.go:99] Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:47:04.747849    5628 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 23:47:04.747891    5628 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:47:04.765040    5628 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:47:04.765233    5628 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory
	I1121 23:47:04.765341    5628 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:47:04.804595    5628 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1121 23:47:04.804620    5628 cache.go:65] Caching tarball of preloaded images
	I1121 23:47:04.804797    5628 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 23:47:04.808168    5628 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1121 23:47:04.808216    5628 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1121 23:47:04.902542    5628 preload.go:295] Got checksum from GCS API "38d7f581f2fa4226c8af2c9106b982b7"
	I1121 23:47:04.902679    5628 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4?checksum=md5:38d7f581f2fa4226c8af2c9106b982b7 -> /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4
	I1121 23:47:09.732736    5628 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e as a tarball
	I1121 23:47:39.124245    5628 cache.go:68] Finished verifying existence of preloaded tar for v1.28.0 on containerd
	I1121 23:47:39.124690    5628 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/download-only-431646/config.json ...
	I1121 23:47:39.124734    5628 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/download-only-431646/config.json: {Name:mkeaffa747d059f2ad0f99888749f32736d834e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:47:39.124943    5628 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1121 23:47:39.125175    5628 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21934-2332/.minikube/cache/linux/arm64/v1.28.0/kubectl
	
	
	* The control-plane node download-only-431646 host does not exist
	  To start a cluster, run: "minikube start -p download-only-431646"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)
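The preload download above appends an md5 checksum to the tarball URL as a query parameter. For reference, the same artifact can be fetched and verified by hand; a minimal sketch using the URL and checksum reported in the log above (curl and md5sum assumed available on the host):

    # Fetch the v1.28.0 containerd/arm64 preload tarball (URL taken from the log above).
    curl -fLo preload.tar.lz4 \
      "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-arm64.tar.lz4"
    # Verify against the checksum the GCS API returned for this run.
    echo "38d7f581f2fa4226c8af2c9106b982b7  preload.tar.lz4" | md5sum -c -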

TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-431646
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (35.81s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-281768 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-281768 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (35.807679577s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (35.81s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1121 23:48:16.085293    5623 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
I1121 23:48:16.085327    5623 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-281768
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-281768: exit status 85 (87.737045ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                         ARGS                                                                                          │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-431646 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-431646 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                 │ minikube             │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ delete  │ -p download-only-431646                                                                                                                                                               │ download-only-431646 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │ 21 Nov 25 23:47 UTC │
	│ start   │ -o=json --download-only -p download-only-281768 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd │ download-only-281768 │ jenkins │ v1.37.0 │ 21 Nov 25 23:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/21 23:47:40
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.25.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1121 23:47:40.327334    5827 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:47:40.327465    5827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:40.327502    5827 out.go:374] Setting ErrFile to fd 2...
	I1121 23:47:40.327514    5827 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:47:40.327773    5827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1121 23:47:40.328156    5827 out.go:368] Setting JSON to true
	I1121 23:47:40.328857    5827 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":1798,"bootTime":1763767063,"procs":143,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1121 23:47:40.328919    5827 start.go:143] virtualization:  
	I1121 23:47:40.332244    5827 out.go:99] [download-only-281768] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 23:47:40.332525    5827 notify.go:221] Checking for updates...
	I1121 23:47:40.336099    5827 out.go:171] MINIKUBE_LOCATION=21934
	I1121 23:47:40.339023    5827 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:47:40.341920    5827 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1121 23:47:40.344737    5827 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1121 23:47:40.347777    5827 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W1121 23:47:40.353609    5827 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1121 23:47:40.353886    5827 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:47:40.388167    5827 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 23:47:40.388285    5827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:40.448236    5827 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-21 23:47:40.438967049 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:40.448355    5827 docker.go:319] overlay module found
	I1121 23:47:40.451282    5827 out.go:99] Using the docker driver based on user configuration
	I1121 23:47:40.451313    5827 start.go:309] selected driver: docker
	I1121 23:47:40.451320    5827 start.go:930] validating driver "docker" against <nil>
	I1121 23:47:40.451431    5827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:47:40.510966    5827 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:27 OomKillDisable:true NGoroutines:43 SystemTime:2025-11-21 23:47:40.501768716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:47:40.511126    5827 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1121 23:47:40.511443    5827 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I1121 23:47:40.511618    5827 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1121 23:47:40.514863    5827 out.go:171] Using Docker driver with root privileges
	I1121 23:47:40.517632    5827 cni.go:84] Creating CNI manager for ""
	I1121 23:47:40.517697    5827 cni.go:143] "docker" driver + "containerd" runtime found, recommending kindnet
	I1121 23:47:40.517709    5827 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1121 23:47:40.517791    5827 start.go:353] cluster config:
	{Name:download-only-281768 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-281768 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:47:40.520766    5827 out.go:99] Starting "download-only-281768" primary control-plane node in "download-only-281768" cluster
	I1121 23:47:40.520789    5827 cache.go:134] Beginning downloading kic base image for docker with containerd
	I1121 23:47:40.523697    5827 out.go:99] Pulling base image v0.0.48-1763588073-21934 ...
	I1121 23:47:40.523734    5827 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 23:47:40.523832    5827 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local docker daemon
	I1121 23:47:40.539942    5827 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e to local cache
	I1121 23:47:40.540066    5827 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory
	I1121 23:47:40.540091    5827 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e in local cache directory, skipping pull
	I1121 23:47:40.540097    5827 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e exists in cache, skipping pull
	I1121 23:47:40.540104    5827 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e as a tarball
	I1121 23:47:40.579269    5827 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1121 23:47:40.579300    5827 cache.go:65] Caching tarball of preloaded images
	I1121 23:47:40.579476    5827 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 23:47:40.582684    5827 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1121 23:47:40.582715    5827 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4 from gcs api...
	I1121 23:47:40.669853    5827 preload.go:295] Got checksum from GCS API "435977642a202d20ca04f26d87d875a8"
	I1121 23:47:40.669906    5827 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4?checksum=md5:435977642a202d20ca04f26d87d875a8 -> /home/jenkins/minikube-integration/21934-2332/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-containerd-overlay2-arm64.tar.lz4
	I1121 23:48:15.309490    5827 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on containerd
	I1121 23:48:15.309856    5827 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/download-only-281768/config.json ...
	I1121 23:48:15.309888    5827 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/download-only-281768/config.json: {Name:mk891af3d3a940347bb76052c8ae69593531ed10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1121 23:48:15.310081    5827 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime containerd
	I1121 23:48:15.310237    5827 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/21934-2332/.minikube/cache/linux/arm64/v1.34.1/kubectl
	
	
	* The control-plane node download-only-281768 host does not exist
	  To start a cluster, run: "minikube start -p download-only-281768"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.09s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-281768
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
I1121 23:48:17.222955    5623 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-503204 --alsologtostderr --binary-mirror http://127.0.0.1:37363 --driver=docker  --container-runtime=containerd
helpers_test.go:175: Cleaning up "binary-mirror-503204" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-503204
--- PASS: TestBinaryMirror (0.58s)
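TestBinaryMirror points minikube at a local HTTP server via --binary-mirror instead of dl.k8s.io. A minimal sketch of standing up such a mirror by hand; the directory layout mirrors the dl.k8s.io release path seen in the log, the port matches this run, and the profile name and digest-only .sha256 format are assumptions:

    # Mirror must serve the same path layout as dl.k8s.io (port from the log; profile name illustrative).
    mkdir -p mirror/release/v1.34.1/bin/linux/arm64
    cp ~/.minikube/cache/linux/arm64/v1.34.1/kubectl mirror/release/v1.34.1/bin/linux/arm64/
    # The client fetches kubectl.sha256 next to the binary (see the checksum=file: URL in the log);
    # a digest-only file matching dl.k8s.io's format is assumed here.
    (cd mirror/release/v1.34.1/bin/linux/arm64 && sha256sum kubectl | cut -d' ' -f1 > kubectl.sha256)
    (cd mirror && python3 -m http.server 37363 &)
    out/minikube-linux-arm64 start --download-only -p binary-mirror-demo \
      --binary-mirror http://127.0.0.1:37363 --driver=docker --container-runtime=containerd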

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-336804
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-336804: exit status 85 (71.45836ms)

-- stdout --
	* Profile "addons-336804" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-336804"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-336804
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-336804: exit status 85 (68.871421ms)

-- stdout --
	* Profile "addons-336804" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-336804"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (160.97s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-336804 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-336804 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m40.970296153s)
--- PASS: TestAddons/Setup (160.97s)

TestAddons/serial/Volcano (42.66s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:884: volcano-controller stabilized in 34.783705ms
addons_test.go:868: volcano-scheduler stabilized in 35.461874ms
addons_test.go:876: volcano-admission stabilized in 35.602799ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-76c996c8bf-6msw7" [03029de5-7296-4b66-9250-12f92a209fd4] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.005155406s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-6c447bd768-pvz2l" [138f622e-96f5-49f1-bc1d-d2c68d23121c] Pending / Ready:ContainersNotReady (containers with unready status: [admission]) / ContainersReady:ContainersNotReady (containers with unready status: [admission])
helpers_test.go:352: "volcano-admission-6c447bd768-pvz2l" [138f622e-96f5-49f1-bc1d-d2c68d23121c] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 7.003336281s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-6fd4f85cb8-6dkj8" [965a73a3-6bad-4a38-9ccf-56e089166e9c] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.003044812s
addons_test.go:903: (dbg) Run:  kubectl --context addons-336804 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-336804 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-336804 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [1d9ae515-e22d-408e-8437-ec2452ad0108] Pending
helpers_test.go:352: "test-job-nginx-0" [1d9ae515-e22d-408e-8437-ec2452ad0108] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [1d9ae515-e22d-408e-8437-ec2452ad0108] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.004001672s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-336804 addons disable volcano --alsologtostderr -v=1: (12.092684826s)
--- PASS: TestAddons/serial/Volcano (42.66s)
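The Volcano check drives a vcjob through the scheduler/admission/controller trio and then waits on the job's pods by label. A sketch of the equivalent manual inspection, reusing the context, namespace, and label selector from the log above:

    # List Volcano jobs, then watch the pods the job spawns.
    kubectl --context addons-336804 get vcjob -n my-volcano
    kubectl --context addons-336804 get pods -n my-volcano \
      -l volcano.sh/job-name=test-job -w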

TestAddons/serial/GCPAuth/Namespaces (0.2s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-336804 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-336804 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (10.06s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-336804 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-336804 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e81ddc63-d750-4f78-9dba-22587eeced4a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e81ddc63-d750-4f78-9dba-22587eeced4a] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.003732367s
addons_test.go:694: (dbg) Run:  kubectl --context addons-336804 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-336804 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-336804 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-336804 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.06s)
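The fake-credentials check asserts that the gcp-auth webhook mutates new pods with a mounted credentials file and the matching env vars. The same three probes the test runs, as standalone commands against the busybox pod:

    kubectl --context addons-336804 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS
    kubectl --context addons-336804 exec busybox -- cat /google-app-creds.json
    kubectl --context addons-336804 exec busybox -- printenv GOOGLE_CLOUD_PROJECT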

TestAddons/parallel/Registry (17.08s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 5.185924ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-q4r7c" [e203ee2e-f6d5-4b16-a01c-0e0edeaf2073] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003941244s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-kvljm" [035807ad-9029-4685-960a-0fccc41c161f] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003926011s
addons_test.go:392: (dbg) Run:  kubectl --context addons-336804 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-336804 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-336804 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.970307949s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 ip
2025/11/21 23:52:17 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.08s)
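The registry check resolves the in-cluster service DNS name from a throwaway busybox pod; wget --spider requests headers only, so a zero exit means the registry answered. The probe from the log, reusable as a one-liner:

    kubectl --context addons-336804 run --rm registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -it -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"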

TestAddons/parallel/RegistryCreds (0.73s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 4.447589ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-336804
addons_test.go:332: (dbg) Run:  kubectl --context addons-336804 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.73s)

TestAddons/parallel/Ingress (20s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-336804 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-336804 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-336804 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [d19b6a87-86ce-49ec-a654-8a3f7c01698e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [d19b6a87-86ce-49ec-a654-8a3f7c01698e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003369922s
I1121 23:53:39.161343    5623 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-336804 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-336804 addons disable ingress-dns --alsologtostderr -v=1: (1.335121996s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-336804 addons disable ingress --alsologtostderr -v=1: (7.947119411s)
--- PASS: TestAddons/parallel/Ingress (20.00s)
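The ingress check curls through the controller with a Host header, then resolves a test name against the node IP via ingress-dns. Both probes from the log, runnable by hand (192.168.49.2 is this run's node IP, as reported by the ip command above):

    out/minikube-linux-arm64 -p addons-336804 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    nslookup hello-john.test 192.168.49.2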

TestAddons/parallel/InspektorGadget (11.83s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-vngh6" [975872c3-b6ff-4e36-b719-ace6af0c4113] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.00367662s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-336804 addons disable inspektor-gadget --alsologtostderr -v=1: (5.829298093s)
--- PASS: TestAddons/parallel/InspektorGadget (11.83s)

TestAddons/parallel/MetricsServer (6s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 42.437416ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-dtfp5" [8411a069-d875-4dea-b929-9e32ce16d2d8] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003805661s
addons_test.go:463: (dbg) Run:  kubectl --context addons-336804 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.00s)
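kubectl top only returns rows once metrics-server is actually serving metrics, so the command doubles as the health probe. The pod view is taken from the log; the node view is an obvious companion check added here:

    kubectl --context addons-336804 top pods -n kube-system
    kubectl --context addons-336804 top nodes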

TestAddons/parallel/CSI (52.01s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1121 23:52:42.468687    5623 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1121 23:52:42.474693    5623 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1121 23:52:42.474803    5623 kapi.go:107] duration metric: took 6.129372ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.164738ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-336804 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-336804 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [9083f976-c14f-4024-8b93-2593c749fe41] Pending
helpers_test.go:352: "task-pv-pod" [9083f976-c14f-4024-8b93-2593c749fe41] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [9083f976-c14f-4024-8b93-2593c749fe41] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003691119s
addons_test.go:572: (dbg) Run:  kubectl --context addons-336804 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-336804 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-336804 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-336804 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-336804 delete pod task-pv-pod: (1.203933593s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-336804 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-336804 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-336804 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [99e9fabc-5b97-43f5-8a4e-77edefd6cfd2] Pending
helpers_test.go:352: "task-pv-pod-restore" [99e9fabc-5b97-43f5-8a4e-77edefd6cfd2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [99e9fabc-5b97-43f5-8a4e-77edefd6cfd2] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.004188182s
addons_test.go:614: (dbg) Run:  kubectl --context addons-336804 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-336804 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-336804 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-336804 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.805729853s)
--- PASS: TestAddons/parallel/CSI (52.01s)
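The CSI flow above is: create a PVC, bind it from a pod, snapshot it, then restore into a new PVC and pod. The testdata manifests are not reproduced in this report; a minimal sketch of the first step, under the assumption that the addon's storage class is named csi-hostpath-sc (the PVC name mirrors the log's hpvc, and the size is illustrative):

    kubectl --context addons-336804 apply -f - <<EOF
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-hostpath-sc   # assumption: class installed by the csi-hostpath-driver addon
    EOF
    # Poll the phase the same way helpers_test.go does above.
    kubectl --context addons-336804 get pvc hpvc -o jsonpath={.status.phase} -n default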

TestAddons/parallel/Headlamp (17.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-336804 --alsologtostderr -v=1
addons_test.go:808: (dbg) Done: out/minikube-linux-arm64 addons enable headlamp -p addons-336804 --alsologtostderr -v=1: (1.077457933s)
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-6945c6f4d-8n9f7" [0868650f-40de-405b-9cd3-d7a9534ce2bd] Pending
helpers_test.go:352: "headlamp-6945c6f4d-8n9f7" [0868650f-40de-405b-9cd3-d7a9534ce2bd] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-6945c6f4d-8n9f7" [0868650f-40de-405b-9cd3-d7a9534ce2bd] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.00320779s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-336804 addons disable headlamp --alsologtostderr -v=1: (5.860742863s)
--- PASS: TestAddons/parallel/Headlamp (17.94s)
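The pod-matching waits that helpers_test.go performs can be approximated with kubectl wait; the selector, namespace, and 8m budget here are taken from the log above:

    kubectl --context addons-336804 -n headlamp wait pod \
      -l app.kubernetes.io/name=headlamp --for=condition=Ready --timeout=8m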

TestAddons/parallel/CloudSpanner (5.68s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-k5l7z" [f6bb80eb-ef9a-4238-b8e3-03f3acc5ec6c] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003461628s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.68s)

TestAddons/parallel/LocalPath (53.58s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-336804 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-336804 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-336804 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [3e522dac-4dd4-4a7f-b5ae-3ce0eb81835f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [3e522dac-4dd4-4a7f-b5ae-3ce0eb81835f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [3e522dac-4dd4-4a7f-b5ae-3ce0eb81835f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003824425s
addons_test.go:967: (dbg) Run:  kubectl --context addons-336804 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 ssh "cat /opt/local-path-provisioner/pvc-b47b1070-f1f5-42c8-b151-2d44425948ac_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-336804 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-336804 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-336804 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.058965914s)
--- PASS: TestAddons/parallel/LocalPath (53.58s)
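
The helper above polls the PVC phase with a raw jsonpath query until it leaves Pending. A small sketch of that polling under the same command; the loop structure is illustrative, not the test's actual Go code:

  # poll until the local-path PVC is Bound; same jsonpath query as the log
  while true; do
    phase="$(kubectl --context addons-336804 get pvc test-pvc -n default \
      -o jsonpath='{.status.phase}')"
    [ "$phase" = "Bound" ] && break
    sleep 5
  done
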
TestAddons/parallel/NvidiaDevicePlugin (6.04s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-m8wz5" [52caee61-c110-487f-a145-145935595ef3] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003975982s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-336804 addons disable nvidia-device-plugin --alsologtostderr -v=1: (1.037442154s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.04s)

TestAddons/parallel/Yakd (11.81s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-6jlgw" [844d7294-93fb-427c-88a9-275332404a3e] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.005321226s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-336804 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-336804 addons disable yakd --alsologtostderr -v=1: (5.799488357s)
--- PASS: TestAddons/parallel/Yakd (11.81s)

TestAddons/StoppedEnableDisable (12.39s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-336804
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-336804: (12.10979307s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-336804
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-336804
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-336804
--- PASS: TestAddons/StoppedEnableDisable (12.39s)
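
StoppedEnableDisable checks that addon toggling still works against a stopped cluster. The sequence from the log, as a standalone sketch:

  out/minikube-linux-arm64 stop -p addons-336804              # ~12s in this run
  out/minikube-linux-arm64 addons enable dashboard -p addons-336804
  out/minikube-linux-arm64 addons disable dashboard -p addons-336804
  out/minikube-linux-arm64 addons disable gvisor -p addons-336804
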
TestCertOptions (36.98s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-089440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-089440 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (34.105345364s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-089440 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-089440 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-089440 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-089440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-089440
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-089440: (2.142861187s)
--- PASS: TestCertOptions (36.98s)
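
TestCertOptions starts a cluster with extra apiserver SANs and a custom apiserver port (8555), then reads the certificate back out of the node. A sketch of the verification step under the log's commands; the grep filters are illustrative additions, assuming openssl's standard SAN output format:

  # dump the apiserver cert from inside the node and filter for the SANs
  out/minikube-linux-arm64 -p cert-options-089440 ssh \
    "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
    | grep -A1 "Subject Alternative Name"
  # confirm the kubeconfig points at the custom 8555 apiserver port
  kubectl --context cert-options-089440 config view | grep server
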
TestCertExpiration (232.8s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-285797 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-285797 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=containerd: (41.378940859s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-285797 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-285797 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=containerd: (8.836072363s)
helpers_test.go:175: Cleaning up "cert-expiration-285797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-285797
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-285797: (2.576082022s)
--- PASS: TestCertExpiration (232.80s)
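
The ~233s wall time here is mostly deliberate waiting: the first start issues certificates that expire in 3 minutes, the test waits past that expiry, and a second start must transparently regenerate them. The two starts from the log:

  # first start: certificates valid for only 3 minutes
  out/minikube-linux-arm64 start -p cert-expiration-285797 --memory=3072 \
    --cert-expiration=3m --driver=docker --container-runtime=containerd
  # (the test sleeps past the 3m expiry here)
  # second start: must renew the expired certs; 8760h = 1 year
  out/minikube-linux-arm64 start -p cert-expiration-285797 --memory=3072 \
    --cert-expiration=8760h --driver=docker --container-runtime=containerd
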
TestForceSystemdFlag (37.33s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-464314 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
E1122 00:33:18.987351    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-464314 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (34.546956206s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-464314 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-flag-464314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-464314
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-464314: (2.309984949s)
--- PASS: TestForceSystemdFlag (37.33s)
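
Both systemd tests reduce to starting with the flag (or, for TestForceSystemdEnv below, presumably the MINIKUBE_FORCE_SYSTEMD environment variable that appears in env listings elsewhere in this log) and reading containerd's config back. A sketch; the grep for SystemdCgroup is an assumed check, since the actual assertion lives in docker_test.go and is not shown here:

  out/minikube-linux-arm64 start -p force-systemd-flag-464314 --memory=3072 \
    --force-systemd --alsologtostderr -v=5 --driver=docker --container-runtime=containerd
  # the test cats the config; grepping the runc SystemdCgroup option is an
  # assumed equivalent of its verification
  out/minikube-linux-arm64 -p force-systemd-flag-464314 \
    ssh "cat /etc/containerd/config.toml" | grep SystemdCgroup
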
TestForceSystemdEnv (44.78s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-115975 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-115975 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (41.871441436s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-115975 ssh "cat /etc/containerd/config.toml"
helpers_test.go:175: Cleaning up "force-systemd-env-115975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-115975
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-115975: (2.524738514s)
--- PASS: TestForceSystemdEnv (44.78s)

TestDockerEnvContainerd (49.58s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd true linux arm64
docker_test.go:181: (dbg) Run:  out/minikube-linux-arm64 start -p dockerenv-049260 --driver=docker  --container-runtime=containerd
docker_test.go:181: (dbg) Done: out/minikube-linux-arm64 start -p dockerenv-049260 --driver=docker  --container-runtime=containerd: (33.719312972s)
docker_test.go:189: (dbg) Run:  /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-049260"
docker_test.go:189: (dbg) Done: /bin/bash -c "out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-049260": (1.052622958s)
docker_test.go:220: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-MclfNNb5pkgx/agent.25140" SSH_AGENT_PID="25141" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker version"
docker_test.go:243: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-MclfNNb5pkgx/agent.25140" SSH_AGENT_PID="25141" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env"
docker_test.go:243: (dbg) Done: /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-MclfNNb5pkgx/agent.25140" SSH_AGENT_PID="25141" DOCKER_HOST=ssh://docker@127.0.0.1:32773 DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env": (1.213837881s)
docker_test.go:250: (dbg) Run:  /bin/bash -c "SSH_AUTH_SOCK="/tmp/ssh-MclfNNb5pkgx/agent.25140" SSH_AGENT_PID="25141" DOCKER_HOST=ssh://docker@127.0.0.1:32773 docker image ls"
helpers_test.go:175: Cleaning up "dockerenv-049260" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p dockerenv-049260
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p dockerenv-049260: (2.147890881s)
--- PASS: TestDockerEnvContainerd (49.58s)
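
The test drives docker-env by exporting the emitted SSH agent variables by hand; interactively, the same thing is conventionally done with eval. A sketch under that assumption, using only commands that appear in the log:

  out/minikube-linux-arm64 start -p dockerenv-049260 --driver=docker --container-runtime=containerd
  # --ssh-host/--ssh-add emit DOCKER_HOST=ssh://... plus ssh-agent variables;
  # eval-ing them is the conventional usage (the test exports them manually)
  eval "$(out/minikube-linux-arm64 docker-env --ssh-host --ssh-add -p dockerenv-049260)"
  docker version     # now talks to the in-cluster docker daemon over SSH
  DOCKER_BUILDKIT=0 docker build -t local/minikube-dockerenv-containerd-test:latest testdata/docker-env
  docker image ls
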
TestErrorSpam/setup (31.23s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-142633 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-142633 --driver=docker  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-142633 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-142633 --driver=docker  --container-runtime=containerd: (31.226319659s)
--- PASS: TestErrorSpam/setup (31.23s)

TestErrorSpam/start (0.93s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 start --dry-run
--- PASS: TestErrorSpam/start (0.93s)

TestErrorSpam/status (1.15s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 status
--- PASS: TestErrorSpam/status (1.15s)

TestErrorSpam/pause (1.72s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 pause
--- PASS: TestErrorSpam/pause (1.72s)

TestErrorSpam/unpause (1.86s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 unpause
--- PASS: TestErrorSpam/unpause (1.86s)

TestErrorSpam/stop (2.29s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 stop: (2.086520235s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-142633 --log_dir /tmp/nospam-142633 stop
--- PASS: TestErrorSpam/stop (2.29s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/test/nested/copy/5623/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (81.25s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E1121 23:55:58.854931    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:58.861347    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:58.872764    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:58.894142    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:58.935537    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:59.016940    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:59.178381    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:55:59.500054    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:56:00.142053    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:56:01.423417    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:56:03.984761    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:56:09.106860    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:56:19.351262    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1121 23:56:39.832528    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-656006 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (1m21.250165802s)
--- PASS: TestFunctional/serial/StartWithProxy (81.25s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.92s)

=== RUN   TestFunctional/serial/SoftStart
I1121 23:57:03.017068    5623 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656006 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-656006 --alsologtostderr -v=8: (6.920081106s)
functional_test.go:678: soft start took 6.92295897s for "functional-656006" cluster.
I1121 23:57:09.937453    5623 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.92s)

TestFunctional/serial/KubeContext (0.07s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.07s)

TestFunctional/serial/KubectlGetPods (0.11s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-656006 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.11s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-656006 cache add registry.k8s.io/pause:3.1: (1.277092796s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-656006 cache add registry.k8s.io/pause:3.3: (1.06634277s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-arm64 -p functional-656006 cache add registry.k8s.io/pause:latest: (1.032212239s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

TestFunctional/serial/CacheCmd/cache/add_local (1.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-656006 /tmp/TestFunctionalserialCacheCmdcacheadd_local626253491/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 cache add minikube-local-cache-test:functional-656006
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 cache delete minikube-local-cache-test:functional-656006
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-656006
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.20s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656006 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (300.132713ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.83s)
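
cache_reload verifies that `cache reload` repopulates an image deleted out from under the runtime; the exit-1 inspecti captured above is the expected midpoint, not a failure. The flow from the log as a standalone sketch:

  # remove the cached image inside the node
  out/minikube-linux-arm64 -p functional-656006 ssh sudo crictl rmi registry.k8s.io/pause:latest
  # inspecti now fails with "no such image" (exit status 1, as captured above)
  out/minikube-linux-arm64 -p functional-656006 ssh sudo crictl inspecti registry.k8s.io/pause:latest
  # reload pushes everything in minikube's cache back into the runtime
  out/minikube-linux-arm64 -p functional-656006 cache reload
  # and the same inspecti succeeds again
  out/minikube-linux-arm64 -p functional-656006 ssh sudo crictl inspecti registry.k8s.io/pause:latest
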
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 kubectl -- --context functional-656006 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-656006 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

TestFunctional/serial/ExtraConfig (51.41s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656006 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1121 23:57:20.794768    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-656006 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (51.406168974s)
functional_test.go:776: restart took 51.406282731s for "functional-656006" cluster.
I1121 23:58:08.760173    5623 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (51.41s)
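
--extra-config takes component.key=value pairs that survive a restart of an existing profile; the ~51s here is the restart blocking on --wait=all until every component is healthy again. The invocation from the log:

  # component.key=value form: here an apiserver admission-plugin override
  out/minikube-linux-arm64 start -p functional-656006 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
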
TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-656006 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.48s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-656006 logs: (1.480271558s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

TestFunctional/serial/LogsFileCmd (1.48s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 logs --file /tmp/TestFunctionalserialLogsFileCmd2086714594/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-656006 logs --file /tmp/TestFunctionalserialLogsFileCmd2086714594/001/logs.txt: (1.48169892s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

TestFunctional/serial/InvalidService (5.23s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-656006 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-656006
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-656006: exit status 115 (956.405325ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30742 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-656006 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-656006 delete -f testdata/invalidsvc.yaml: (1.012720424s)
--- PASS: TestFunctional/serial/InvalidService (5.23s)

TestFunctional/parallel/ConfigCmd (0.42s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656006 config get cpus: exit status 14 (67.419979ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656006 config get cpus: exit status 14 (61.562368ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)
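
The two exit-14 results above are the point of this test: `config get` on an unset key fails with a distinct error code rather than printing nothing. The cycle as a sketch, using the log's own commands:

  out/minikube-linux-arm64 -p functional-656006 config unset cpus
  out/minikube-linux-arm64 -p functional-656006 config get cpus   # exit 14: key not in config
  out/minikube-linux-arm64 -p functional-656006 config set cpus 2
  out/minikube-linux-arm64 -p functional-656006 config get cpus   # succeeds, prints 2
  out/minikube-linux-arm64 -p functional-656006 config unset cpus
  out/minikube-linux-arm64 -p functional-656006 config get cpus   # exit 14 again
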
TestFunctional/parallel/DashboardCmd (10.46s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-656006 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-656006 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 40610: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.46s)

TestFunctional/parallel/DryRun (0.63s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-656006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (286.813031ms)

-- stdout --
	* [functional-656006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1121 23:58:50.093668   40193 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:58:50.094142   40193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:58:50.094153   40193 out.go:374] Setting ErrFile to fd 2...
	I1121 23:58:50.094159   40193 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:58:50.094955   40193 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1121 23:58:50.095504   40193 out.go:368] Setting JSON to false
	I1121 23:58:50.096699   40193 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2467,"bootTime":1763767063,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1121 23:58:50.096904   40193 start.go:143] virtualization:  
	I1121 23:58:50.100640   40193 out.go:179] * [functional-656006] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1121 23:58:50.104109   40193 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:58:50.105437   40193 notify.go:221] Checking for updates...
	I1121 23:58:50.110344   40193 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:58:50.113263   40193 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1121 23:58:50.116340   40193 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1121 23:58:50.119226   40193 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 23:58:50.122137   40193 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:58:50.125846   40193 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 23:58:50.126501   40193 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:58:50.164493   40193 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 23:58:50.164672   40193 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:58:50.277585   40193 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 23:58:50.266446314 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:58:50.277681   40193 docker.go:319] overlay module found
	I1121 23:58:50.280813   40193 out.go:179] * Using the docker driver based on existing profile
	I1121 23:58:50.284524   40193 start.go:309] selected driver: docker
	I1121 23:58:50.284545   40193 start.go:930] validating driver "docker" against &{Name:functional-656006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:58:50.284631   40193 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:58:50.288029   40193 out.go:203] 
	W1121 23:58:50.290855   40193 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1121 23:58:50.293613   40193 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656006 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.63s)
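
DryRun asserts validation without side effects: the 250MB request is rejected up front (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY, usable minimum 1800MB per the captured stderr), while the dry run without the bad memory request succeeds. Sketch from the log's commands:

  # fails validation before touching the cluster: 250MiB < 1800MB minimum (exit 23)
  out/minikube-linux-arm64 start -p functional-656006 --dry-run --memory 250MB \
    --alsologtostderr --driver=docker --container-runtime=containerd
  # the same dry run without the memory override passes
  out/minikube-linux-arm64 start -p functional-656006 --dry-run \
    --alsologtostderr -v=1 --driver=docker --container-runtime=containerd
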
TestFunctional/parallel/InternationalLanguage (0.29s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-656006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-656006 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (289.990763ms)

-- stdout --
	* [functional-656006] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1121 23:58:49.793606   40101 out.go:360] Setting OutFile to fd 1 ...
	I1121 23:58:49.793771   40101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:58:49.793778   40101 out.go:374] Setting ErrFile to fd 2...
	I1121 23:58:49.793783   40101 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1121 23:58:49.794716   40101 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1121 23:58:49.795279   40101 out.go:368] Setting JSON to false
	I1121 23:58:49.796809   40101 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":2467,"bootTime":1763767063,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1121 23:58:49.796894   40101 start.go:143] virtualization:  
	I1121 23:58:49.800435   40101 out.go:179] * [functional-656006] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I1121 23:58:49.803739   40101 out.go:179]   - MINIKUBE_LOCATION=21934
	I1121 23:58:49.803817   40101 notify.go:221] Checking for updates...
	I1121 23:58:49.809581   40101 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1121 23:58:49.813044   40101 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1121 23:58:49.816069   40101 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1121 23:58:49.819272   40101 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1121 23:58:49.822162   40101 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1121 23:58:49.825537   40101 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1121 23:58:49.826096   40101 driver.go:422] Setting default libvirt URI to qemu:///system
	I1121 23:58:49.882003   40101 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1121 23:58:49.882149   40101 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1121 23:58:49.991600   40101 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-11-21 23:58:49.979777581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1121 23:58:49.991705   40101 docker.go:319] overlay module found
	I1121 23:58:49.994908   40101 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1121 23:58:49.997778   40101 start.go:309] selected driver: docker
	I1121 23:58:49.997799   40101 start.go:930] validating driver "docker" against &{Name:functional-656006 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763588073-21934@sha256:19d3da0413e1bfa354cbb88004c6796f8e9772a083e0230b0f6e50212ee04c7e Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-656006 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1121 23:58:49.997937   40101 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1121 23:58:50.001492   40101 out.go:203] 
	W1121 23:58:50.004673   40101 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1121 23:58:50.008069   40101 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.29s)
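
This test runs minikube under a French locale and asserts that the fatal start error is localized. A minimal sketch of that kind of invocation, assuming LC_ALL=fr and the 250MB memory request seen in the log; the test's actual flags and profile handling differ:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Assumption: requesting 250MB trips RSRC_INSUFFICIENT_REQ_MEMORY before
	// any cluster work starts, so the command fails fast with localized text.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-656006",
		"--memory", "250MB", "--alsologtostderr")
	cmd.Env = append(os.Environ(), "LC_ALL=fr")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Fprintln(os.Stderr, "start failed as expected:", err)
	}
}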

TestFunctional/parallel/StatusCmd (1.28s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.28s)
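
The second status invocation above exercises the Go-template output mode. A self-contained sketch of how such a format string renders; the Status struct here is a stand-in, not minikube's real type, and the literal "kublet" key comes from the command line itself:

package main

import (
	"os"
	"text/template"
)

// Stand-in for the status fields referenced by the -f template.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	_ = tmpl.Execute(os.Stdout, st)
	// Output: host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
}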

TestFunctional/parallel/ServiceCmdConnect (10.74s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-656006 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-656006 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-sm95v" [3f0e8042-5d35-4e96-999f-59c4a185fd4b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-sm95v" [3f0e8042-5d35-4e96-999f-59c4a185fd4b] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003981963s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31665
functional_test.go:1680: http://192.168.49.2:31665: success! body:
Request served by hello-node-connect-7d85dfc575-sm95v

HTTP/1.1 GET /

Host: 192.168.49.2:31665
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.74s)
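
The endpoint check boils down to an HTTP GET against the NodePort URL printed by "minikube service ... --url", retried because NodePort routing can lag pod readiness. A sketch, assuming the URL from the run above:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "http://192.168.49.2:31665" // from the run above; changes per run
	for attempt := 0; attempt < 5; attempt++ {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("success! body:\n%s", body)
			return
		}
		time.Sleep(2 * time.Second) // service routing may trail pod readiness
	}
	fmt.Println("endpoint never became reachable")
}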

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (27.3s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [237ddb93-5e09-40a7-9c58-5d6a0ab1dd0f] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004399622s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-656006 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-656006 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-656006 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-656006 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [530d94f6-1501-478a-913b-c41fe5b4b795] Pending
helpers_test.go:352: "sp-pod" [530d94f6-1501-478a-913b-c41fe5b4b795] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [530d94f6-1501-478a-913b-c41fe5b4b795] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003792858s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-656006 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-656006 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-656006 delete -f testdata/storage-provisioner/pod.yaml: (1.094478079s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-656006 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e43e8ff1-d486-4a32-b1d2-593fb041c18c] Pending
helpers_test.go:352: "sp-pod" [e43e8ff1-d486-4a32-b1d2-593fb041c18c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e43e8ff1-d486-4a32-b1d2-593fb041c18c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.00759712s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-656006 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (27.30s)
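
The persistence check above writes a marker file through one pod, deletes the pod, recreates it, and verifies the file survives on the same PVC. A rough sketch of that flow, mirroring the context and manifest paths from the log:

package main

import (
	"log"
	"os/exec"
)

// kubectl runs a command against the test cluster and aborts on failure.
func kubectl(args ...string) string {
	full := append([]string{"--context", "functional-656006"}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	kubectl("exec", "sp-pod", "--", "touch", "/tmp/mount/foo")
	kubectl("delete", "-f", "testdata/storage-provisioner/pod.yaml")
	kubectl("apply", "-f", "testdata/storage-provisioner/pod.yaml")
	// (the real test waits here until the replacement pod is Running)
	log.Print(kubectl("exec", "sp-pod", "--", "ls", "/tmp/mount")) // expect "foo"
}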

TestFunctional/parallel/SSHCmd (0.75s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.75s)

TestFunctional/parallel/CpCmd (2.25s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh -n functional-656006 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 cp functional-656006:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4091045884/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh -n functional-656006 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh -n functional-656006 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.25s)

TestFunctional/parallel/FileSync (0.35s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/5623/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo cat /etc/test/nested/copy/5623/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.35s)

TestFunctional/parallel/CertSync (2.1s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/5623.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo cat /etc/ssl/certs/5623.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/5623.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo cat /usr/share/ca-certificates/5623.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/56232.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo cat /etc/ssl/certs/56232.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/56232.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo cat /usr/share/ca-certificates/56232.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.10s)
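
What this test verifies is that the same certificate bytes appear on the host and inside the guest (the .0 entries are OpenSSL subject-hash filenames for the same certs). A sketch comparing checksums of the two copies; the host-side path is an assumption about the test layout:

package main

import (
	"crypto/sha256"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func main() {
	// Assumption: the host-side copy of the synced cert lives under the
	// MINIKUBE_HOME files tree; adjust the path for a real environment.
	hostPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21934-2332/.minikube/files/etc/ssl/certs/5623.pem")
	if err != nil {
		log.Fatal(err)
	}
	guestPEM, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-656006",
		"ssh", "sudo cat /etc/ssl/certs/5623.pem").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("host  %x\nguest %x\n", sha256.Sum256(hostPEM), sha256.Sum256(guestPEM))
}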

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-656006 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656006 ssh "sudo systemctl is-active docker": exit status 1 (314.164838ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656006 ssh "sudo systemctl is-active crio": exit status 1 (285.62881ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)
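
Here a non-zero exit is the desired outcome: "systemctl is-active" exits 0 only when the unit is active, so the "inactive" output plus exit status proves docker and crio are disabled under the containerd runtime. A sketch of that check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-656006",
		"ssh", "sudo systemctl is-active docker").CombinedOutput()
	state := strings.TrimSpace(string(out))
	if err != nil && state == "inactive" {
		fmt.Println("ok: docker is not the active runtime")
		return
	}
	fmt.Printf("unexpected: state=%q err=%v\n", state, err)
}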

TestFunctional/parallel/License (0.3s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.30s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-656006 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-656006 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-656006 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 37839: os: process already finished
helpers_test.go:525: unable to kill pid 37632: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-656006 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.75s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-656006 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-656006 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [d4720c73-a6a4-400e-b75b-ccc2f9fcc813] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [d4720c73-a6a4-400e-b75b-ccc2f9fcc813] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003696831s
I1121 23:58:27.433164    5623 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.45s)
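
The "waiting for pods matching" helper seen throughout this report is essentially a label-selector poll. A rough equivalent with client-go; the kubeconfig path mirrors the log, and the real helper also checks readiness conditions rather than just the Running phase:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21934-2332/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	for {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "run=nginx-svc"})
		if err != nil {
			log.Fatal(err)
		}
		if len(pods.Items) > 0 && pods.Items[0].Status.Phase == corev1.PodRunning {
			fmt.Println("pod is Running")
			return
		}
		time.Sleep(2 * time.Second) // poll until the pod appears and reaches Running
	}
}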

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-656006 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)
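
With "minikube tunnel" running, the LoadBalancer service is assigned an external IP, which the jsonpath query above extracts. A sketch of the same query driven from Go:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "functional-656006",
		"get", "svc", "nginx-svc",
		"-o", "jsonpath={.status.loadBalancer.ingress[0].ip}").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("ingress IP: %s\n", out) // e.g. 10.103.168.154 in the run above
}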

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.168.154 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-656006 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-656006 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-656006 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-bfpcz" [e23508b3-891f-4867-a95d-359667a09c1a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-bfpcz" [e23508b3-891f-4867-a95d-359667a09c1a] Running
E1121 23:58:42.716547    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.003589449s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.23s)

TestFunctional/parallel/ServiceCmd/List (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 service list -o json
functional_test.go:1504: Took "581.122348ms" to run "out/minikube-linux-arm64 -p functional-656006 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.58s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.57s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31162
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.49s)

TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "485.619574ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "67.250173ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "510.855393ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "77.48796ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.59s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31162
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestFunctional/parallel/MountCmd/any-port (8.85s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdany-port1804514646/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763769528328454165" to /tmp/TestFunctionalparallelMountCmdany-port1804514646/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763769528328454165" to /tmp/TestFunctionalparallelMountCmdany-port1804514646/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763769528328454165" to /tmp/TestFunctionalparallelMountCmdany-port1804514646/001/test-1763769528328454165
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656006 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (505.07908ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 23:58:48.833861    5623 retry.go:31] will retry after 566.282981ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 21 23:58 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 21 23:58 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 21 23:58 test-1763769528328454165
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh cat /mount-9p/test-1763769528328454165
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-656006 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [adcd8617-acb0-4d41-bcc2-a8c00025af84] Pending
helpers_test.go:352: "busybox-mount" [adcd8617-acb0-4d41-bcc2-a8c00025af84] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [adcd8617-acb0-4d41-bcc2-a8c00025af84] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [adcd8617-acb0-4d41-bcc2-a8c00025af84] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003554082s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-656006 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdany-port1804514646/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.85s)
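
The "will retry after ..." lines above come from a poll loop: the 9p mount takes a moment to appear inside the guest, so findmnt is retried with backoff. A sketch of that pattern, not the test's exact retry helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(30 * time.Second)
	for backoff := 500 * time.Millisecond; time.Now().Before(deadline); backoff *= 2 {
		out, err := exec.Command("out/minikube-linux-arm64", "-p", "functional-656006",
			"ssh", "findmnt -T /mount-9p | grep 9p").CombinedOutput()
		if err == nil {
			fmt.Printf("mount is up:\n%s", out)
			return
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
	}
	fmt.Println("mount never appeared")
}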

TestFunctional/parallel/MountCmd/specific-port (1.49s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdspecific-port730526173/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdspecific-port730526173/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656006 ssh "sudo umount -f /mount-9p": exit status 1 (274.302742ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-656006 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdspecific-port730526173/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.49s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.32s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1954957042/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1954957042/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1954957042/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656006 ssh "findmnt -T" /mount1: exit status 1 (706.25773ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1121 23:58:59.376069    5623 retry.go:31] will retry after 482.648016ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh "findmnt -T" /mount3
2025/11/21 23:59:00 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-656006 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1954957042/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1954957042/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-656006 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1954957042/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.32s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (1.31s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-656006 version -o=json --components: (1.31487658s)
--- PASS: TestFunctional/parallel/Version/components (1.31s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-656006 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-656006
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-656006
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-656006 image ls --format short --alsologtostderr:
I1121 23:59:07.949533   43354 out.go:360] Setting OutFile to fd 1 ...
I1121 23:59:07.949677   43354 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:59:07.949687   43354 out.go:374] Setting ErrFile to fd 2...
I1121 23:59:07.949692   43354 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:59:07.950075   43354 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
I1121 23:59:07.950846   43354 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:59:07.951044   43354 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:59:07.951694   43354 cli_runner.go:164] Run: docker container inspect functional-656006 --format={{.State.Status}}
I1121 23:59:07.978163   43354 ssh_runner.go:195] Run: systemctl --version
I1121 23:59:07.978222   43354 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656006
I1121 23:59:07.998568   43354 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/functional-656006/id_rsa Username:docker}
I1121 23:59:08.118681   43354 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.28s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-656006 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:138784 │ 20.4MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0            │ sha256:a18947 │ 98.2MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.1            │ sha256:05baa9 │ 22.8MB │
│ docker.io/kicbase/echo-server               │ functional-656006  │ sha256:ce2d2c │ 2.17MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.1            │ sha256:b5f57e │ 15.8MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:d7b100 │ 268kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:b1a8c6 │ 40.6MB │
│ docker.io/library/nginx                     │ alpine             │ sha256:cbad63 │ 23.1MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.1            │ sha256:43911e │ 24.6MB │
│ registry.k8s.io/kube-controller-manager     │ v1.34.1            │ sha256:7eb2c6 │ 20.7MB │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:8057e0 │ 262kB  │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:3d1873 │ 249kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:8cb209 │ 71.3kB │
│ docker.io/library/nginx                     │ latest             │ sha256:bb747c │ 58.3MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:1611cd │ 1.94MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:ba04bb │ 8.03MB │
│ docker.io/library/minikube-local-cache-test │ functional-656006  │ sha256:1189c9 │ 992B   │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-656006 image ls --format table --alsologtostderr:
I1121 23:59:08.539007   43527 out.go:360] Setting OutFile to fd 1 ...
I1121 23:59:08.539886   43527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:59:08.539952   43527 out.go:374] Setting ErrFile to fd 2...
I1121 23:59:08.539974   43527 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:59:08.540366   43527 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
I1121 23:59:08.541253   43527 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:59:08.541460   43527 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:59:08.542186   43527 cli_runner.go:164] Run: docker container inspect functional-656006 --format={{.State.Status}}
I1121 23:59:08.576306   43527 ssh_runner.go:195] Run: systemctl --version
I1121 23:59:08.576376   43527 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656006
I1121 23:59:08.607641   43527 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/functional-656006/id_rsa Username:docker}
I1121 23:59:08.710327   43527 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.27s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-656006 image ls --format json --alsologtostderr:
[{"id":"sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"262191"},{"id":"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"267939"},{"id":"sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"40636774"},{"id":"sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-656006"],"size":"2173567"},{"id":"sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1","repoDigests":["docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"23117513"},{"id":"sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"1935750"},{"id":"sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"8034419"},{"id":"sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"249461"},{"id":"sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"],"repoTags":[],"size":"74084559"},{"id":"sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"18306114"},{"id":"sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42"],"repoTags":["docker.io/library/nginx:latest"],"size":"58263548"},{"id":"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"20392204"},{"id":"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"22788047"},{"id":"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0","repoDigests":["registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"15779817"},{"id":"sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"71300"},{"id":"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"20720058"},{"id":"sha256:1189c95a51f4ffc789cf4fe3ce1215c6f6bf5428ef59b8dc4d2da57d8342b950","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-656006"],"size":"992"},{"id":"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e","repoDigests":["registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"98207481"},{"id":"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196","repoDigests":["registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"24571109"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-656006 image ls --format json --alsologtostderr:
I1121 23:59:08.266635   43432 out.go:360] Setting OutFile to fd 1 ...
I1121 23:59:08.266789   43432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:59:08.266796   43432 out.go:374] Setting ErrFile to fd 2...
I1121 23:59:08.266802   43432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:59:08.267069   43432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
I1121 23:59:08.271174   43432 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:59:08.271397   43432 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:59:08.272191   43432 cli_runner.go:164] Run: docker container inspect functional-656006 --format={{.State.Status}}
I1121 23:59:08.294438   43432 ssh_runner.go:195] Run: systemctl --version
I1121 23:59:08.294497   43432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656006
I1121 23:59:08.314474   43432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/functional-656006/id_rsa Username:docker}
I1121 23:59:08.431082   43432 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)
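
The JSON emitted above is a plain array of image records, which makes it easy to consume programmatically. A sketch of decoding it in Go; the struct mirrors only the fields shown in this report, not the full schema, and the images.json filename is an assumption (e.g. saved "image ls --format json" output):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os"
)

// image captures the subset of fields visible in the output above.
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	data, err := os.ReadFile("images.json")
	if err != nil {
		log.Fatal(err)
	}
	var images []image
	if err := json.Unmarshal(data, &images); err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Printf("%-60v %s\n", img.RepoTags, img.Size)
	}
}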

TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-656006 image ls --format yaml --alsologtostderr:
- id: sha256:1189c95a51f4ffc789cf4fe3ce1215c6f6bf5428ef59b8dc4d2da57d8342b950
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-656006
size: "992"
- id: sha256:cbad6347cca28a6ee7b08793856bc6fcb2c2c7a377a62a5e6d785895c4194ac1
repoDigests:
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "23117513"
- id: sha256:bb747ca923a5e1139baddd6f4743e0c0c74df58f4ad8ddbc10ab183b92f5a5c7
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
repoTags:
- docker.io/library/nginx:latest
size: "58263548"
- id: sha256:ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "8034419"
- id: sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "267939"
- id: sha256:8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "262191"
- id: sha256:8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "71300"
- id: sha256:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "18306114"
- id: sha256:1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "1935750"
- id: sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "20720058"
- id: sha256:b1a8c6f707935fd5f346ce5846d21ff8dd65e14c15406a14dbd16b9b897b9b4c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "40636774"
- id: sha256:20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
repoTags: []
size: "74084559"
- id: sha256:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-656006
size: "2173567"
- id: sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "20392204"
- id: sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "24571109"
- id: sha256:3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "249461"
- id: sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "22788047"
- id: sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "15779817"
- id: sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests:
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "98207481"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-656006 image ls --format yaml --alsologtostderr:
I1121 23:59:07.960857   43353 out.go:360] Setting OutFile to fd 1 ...
I1121 23:59:07.961129   43353 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:59:07.961159   43353 out.go:374] Setting ErrFile to fd 2...
I1121 23:59:07.961194   43353 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:59:07.961476   43353 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
I1121 23:59:07.962116   43353 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:59:07.962277   43353 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:59:07.962850   43353 cli_runner.go:164] Run: docker container inspect functional-656006 --format={{.State.Status}}
I1121 23:59:07.989362   43353 ssh_runner.go:195] Run: systemctl --version
I1121 23:59:07.989412   43353 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656006
I1121 23:59:08.020336   43353 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/functional-656006/id_rsa Username:docker}
I1121 23:59:08.130692   43353 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.32s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-656006 ssh pgrep buildkitd: exit status 1 (364.283008ms)

** stderr **
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image build -t localhost/my-image:functional-656006 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-656006 image build -t localhost/my-image:functional-656006 testdata/build --alsologtostderr: (3.341448737s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-656006 image build -t localhost/my-image:functional-656006 testdata/build --alsologtostderr:
I1121 23:59:08.596901   43533 out.go:360] Setting OutFile to fd 1 ...
I1121 23:59:08.597158   43533 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:59:08.597172   43533 out.go:374] Setting ErrFile to fd 2...
I1121 23:59:08.597179   43533 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1121 23:59:08.597465   43533 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
I1121 23:59:08.598096   43533 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:59:08.599669   43533 config.go:182] Loaded profile config "functional-656006": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
I1121 23:59:08.605596   43533 cli_runner.go:164] Run: docker container inspect functional-656006 --format={{.State.Status}}
I1121 23:59:08.636784   43533 ssh_runner.go:195] Run: systemctl --version
I1121 23:59:08.636842   43533 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-656006
I1121 23:59:08.662102   43533 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/functional-656006/id_rsa Username:docker}
I1121 23:59:08.766669   43533 build_images.go:162] Building image from path: /tmp/build.1605210806.tar
I1121 23:59:08.767080   43533 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1121 23:59:08.777899   43533 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1605210806.tar
I1121 23:59:08.781834   43533 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1605210806.tar: stat -c "%s %y" /var/lib/minikube/build/build.1605210806.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1605210806.tar': No such file or directory
I1121 23:59:08.781867   43533 ssh_runner.go:362] scp /tmp/build.1605210806.tar --> /var/lib/minikube/build/build.1605210806.tar (3072 bytes)
I1121 23:59:08.802106   43533 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1605210806
I1121 23:59:08.810950   43533 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1605210806 -xf /var/lib/minikube/build/build.1605210806.tar
I1121 23:59:08.820170   43533 containerd.go:394] Building image: /var/lib/minikube/build/build.1605210806
I1121 23:59:08.820274   43533 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1605210806 --local dockerfile=/var/lib/minikube/build/build.1605210806 --output type=image,name=localhost/my-image:functional-656006
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.2s
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.1s done
#5 DONE 0.6s

#6 [2/3] RUN true
#6 DONE 0.4s

#7 [3/3] ADD content.txt /
#7 DONE 0.0s

#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:6fb7ea507f8545afb01e7960624d7c8bd6795cbe860301835694bc16d1844afb
#8 exporting manifest sha256:6fb7ea507f8545afb01e7960624d7c8bd6795cbe860301835694bc16d1844afb 0.0s done
#8 exporting config sha256:7d5f702457ca8d0cfcee158e58cd1ce85161e65e80ce4df538e3b43588b11af1 0.0s done
#8 naming to localhost/my-image:functional-656006 done
#8 DONE 0.2s
I1121 23:59:11.842122   43533 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1605210806 --local dockerfile=/var/lib/minikube/build/build.1605210806 --output type=image,name=localhost/my-image:functional-656006: (3.021817031s)
I1121 23:59:11.842192   43533 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1605210806
I1121 23:59:11.852137   43533 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1605210806.tar
I1121 23:59:11.860047   43533 build_images.go:218] Built localhost/my-image:functional-656006 from /tmp/build.1605210806.tar
I1121 23:59:11.860078   43533 build_images.go:134] succeeded building to: functional-656006
I1121 23:59:11.860084   43533 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.93s)

TestFunctional/parallel/ImageCommands/Setup (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-656006
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.67s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image load --daemon kicbase/echo-server:functional-656006 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-arm64 -p functional-656006 image load --daemon kicbase/echo-server:functional-656006 --alsologtostderr: (1.01167309s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.30s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image load --daemon kicbase/echo-server:functional-656006 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.27s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-656006
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image load --daemon kicbase/echo-server:functional-656006 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.38s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image save kicbase/echo-server:functional-656006 /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.39s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image rm kicbase/echo-server:functional-656006 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image load /home/jenkins/workspace/Docker_Linux_containerd_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.63s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-656006
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-656006 image save --daemon kicbase/echo-server:functional-656006 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-656006
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-656006
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-656006
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-656006
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (192.84s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
E1122 00:00:58.854898    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:01:26.558423    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (3m11.874519352s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (192.84s)

TestMultiControlPlane/serial/DeployApp (7.41s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 kubectl -- rollout status deployment/busybox: (4.4083757s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-2fssx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-hgzl9 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-sc2wc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-2fssx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-hgzl9 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-sc2wc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-2fssx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-hgzl9 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-sc2wc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.41s)

TestMultiControlPlane/serial/PingHostFromPods (1.61s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-2fssx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-2fssx -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-hgzl9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-hgzl9 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-sc2wc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 kubectl -- exec busybox-7b57f96db7-sc2wc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.61s)

TestMultiControlPlane/serial/AddWorkerNode (61.39s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 node add --alsologtostderr -v 5
E1122 00:03:18.987882    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:18.994387    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:19.005749    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:19.027239    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:19.068769    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:19.150145    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:19.311630    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:19.633316    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:20.275629    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:21.556969    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:24.119031    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:03:29.240402    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 node add --alsologtostderr -v 5: (1m0.334447366s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5: (1.057035492s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (61.39s)

TestMultiControlPlane/serial/NodeLabels (0.11s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-351012 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.11s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.105886169s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.11s)

TestMultiControlPlane/serial/CopyFile (20.26s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 status --output json --alsologtostderr -v 5
E1122 00:03:39.482203    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 status --output json --alsologtostderr -v 5: (1.074136784s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp testdata/cp-test.txt ha-351012:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3191978289/001/cp-test_ha-351012.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012:/home/docker/cp-test.txt ha-351012-m02:/home/docker/cp-test_ha-351012_ha-351012-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m02 "sudo cat /home/docker/cp-test_ha-351012_ha-351012-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012:/home/docker/cp-test.txt ha-351012-m03:/home/docker/cp-test_ha-351012_ha-351012-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m03 "sudo cat /home/docker/cp-test_ha-351012_ha-351012-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012:/home/docker/cp-test.txt ha-351012-m04:/home/docker/cp-test_ha-351012_ha-351012-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m04 "sudo cat /home/docker/cp-test_ha-351012_ha-351012-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp testdata/cp-test.txt ha-351012-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3191978289/001/cp-test_ha-351012-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m02:/home/docker/cp-test.txt ha-351012:/home/docker/cp-test_ha-351012-m02_ha-351012.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012 "sudo cat /home/docker/cp-test_ha-351012-m02_ha-351012.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m02:/home/docker/cp-test.txt ha-351012-m03:/home/docker/cp-test_ha-351012-m02_ha-351012-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m03 "sudo cat /home/docker/cp-test_ha-351012-m02_ha-351012-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m02:/home/docker/cp-test.txt ha-351012-m04:/home/docker/cp-test_ha-351012-m02_ha-351012-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m04 "sudo cat /home/docker/cp-test_ha-351012-m02_ha-351012-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp testdata/cp-test.txt ha-351012-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3191978289/001/cp-test_ha-351012-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m03:/home/docker/cp-test.txt ha-351012:/home/docker/cp-test_ha-351012-m03_ha-351012.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012 "sudo cat /home/docker/cp-test_ha-351012-m03_ha-351012.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m03:/home/docker/cp-test.txt ha-351012-m02:/home/docker/cp-test_ha-351012-m03_ha-351012-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m02 "sudo cat /home/docker/cp-test_ha-351012-m03_ha-351012-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m03:/home/docker/cp-test.txt ha-351012-m04:/home/docker/cp-test_ha-351012-m03_ha-351012-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m04 "sudo cat /home/docker/cp-test_ha-351012-m03_ha-351012-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp testdata/cp-test.txt ha-351012-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3191978289/001/cp-test_ha-351012-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m04:/home/docker/cp-test.txt ha-351012:/home/docker/cp-test_ha-351012-m04_ha-351012.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012 "sudo cat /home/docker/cp-test_ha-351012-m04_ha-351012.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m04:/home/docker/cp-test.txt ha-351012-m02:/home/docker/cp-test_ha-351012-m04_ha-351012-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m02 "sudo cat /home/docker/cp-test_ha-351012-m04_ha-351012-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 cp ha-351012-m04:/home/docker/cp-test.txt ha-351012-m03:/home/docker/cp-test_ha-351012-m04_ha-351012-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 ssh -n ha-351012-m03 "sudo cat /home/docker/cp-test_ha-351012-m04_ha-351012-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.26s)

TestMultiControlPlane/serial/StopSecondaryNode (12.96s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 node stop m02 --alsologtostderr -v 5
E1122 00:03:59.964198    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 node stop m02 --alsologtostderr -v 5: (12.179696646s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5: exit status 7 (779.12288ms)

-- stdout --
	ha-351012
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-351012-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351012-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-351012-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:04:11.845096   59962 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:04:11.845235   59962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:04:11.845248   59962 out.go:374] Setting ErrFile to fd 2...
	I1122 00:04:11.845254   59962 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:04:11.845513   59962 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:04:11.845733   59962 out.go:368] Setting JSON to false
	I1122 00:04:11.845782   59962 mustload.go:66] Loading cluster: ha-351012
	I1122 00:04:11.845833   59962 notify.go:221] Checking for updates...
	I1122 00:04:11.847088   59962 config.go:182] Loaded profile config "ha-351012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:04:11.847114   59962 status.go:174] checking status of ha-351012 ...
	I1122 00:04:11.847964   59962 cli_runner.go:164] Run: docker container inspect ha-351012 --format={{.State.Status}}
	I1122 00:04:11.867639   59962 status.go:371] ha-351012 host status = "Running" (err=<nil>)
	I1122 00:04:11.867664   59962 host.go:66] Checking if "ha-351012" exists ...
	I1122 00:04:11.867975   59962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-351012
	I1122 00:04:11.903702   59962 host.go:66] Checking if "ha-351012" exists ...
	I1122 00:04:11.904010   59962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:04:11.904062   59962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-351012
	I1122 00:04:11.930652   59962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/ha-351012/id_rsa Username:docker}
	I1122 00:04:12.033736   59962 ssh_runner.go:195] Run: systemctl --version
	I1122 00:04:12.041957   59962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:04:12.055965   59962 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:04:12.119927   59962 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:62 OomKillDisable:true NGoroutines:72 SystemTime:2025-11-22 00:04:12.108616725 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:04:12.120532   59962 kubeconfig.go:125] found "ha-351012" server: "https://192.168.49.254:8443"
	I1122 00:04:12.120581   59962 api_server.go:166] Checking apiserver status ...
	I1122 00:04:12.120631   59962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:04:12.134026   59962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1415/cgroup
	I1122 00:04:12.142593   59962 api_server.go:182] apiserver freezer: "6:freezer:/docker/33f886b30d3ec06463f46161554a06ea85592c22471ecf62abc1ae6ef0ee4ea0/kubepods/burstable/poda8b80002a30c61f7ade36df0a649631a/f4d8da0acfac519cb93a44f7334189b5e4a5173846e283ee4f7cd9b65ddb71ca"
	I1122 00:04:12.142665   59962 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/33f886b30d3ec06463f46161554a06ea85592c22471ecf62abc1ae6ef0ee4ea0/kubepods/burstable/poda8b80002a30c61f7ade36df0a649631a/f4d8da0acfac519cb93a44f7334189b5e4a5173846e283ee4f7cd9b65ddb71ca/freezer.state
	I1122 00:04:12.149997   59962 api_server.go:204] freezer state: "THAWED"
	I1122 00:04:12.150025   59962 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1122 00:04:12.159296   59962 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1122 00:04:12.159342   59962 status.go:463] ha-351012 apiserver status = Running (err=<nil>)
	I1122 00:04:12.159354   59962 status.go:176] ha-351012 status: &{Name:ha-351012 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:04:12.159384   59962 status.go:174] checking status of ha-351012-m02 ...
	I1122 00:04:12.159740   59962 cli_runner.go:164] Run: docker container inspect ha-351012-m02 --format={{.State.Status}}
	I1122 00:04:12.177343   59962 status.go:371] ha-351012-m02 host status = "Stopped" (err=<nil>)
	I1122 00:04:12.177363   59962 status.go:384] host is not running, skipping remaining checks
	I1122 00:04:12.177375   59962 status.go:176] ha-351012-m02 status: &{Name:ha-351012-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:04:12.177412   59962 status.go:174] checking status of ha-351012-m03 ...
	I1122 00:04:12.177717   59962 cli_runner.go:164] Run: docker container inspect ha-351012-m03 --format={{.State.Status}}
	I1122 00:04:12.196218   59962 status.go:371] ha-351012-m03 host status = "Running" (err=<nil>)
	I1122 00:04:12.196242   59962 host.go:66] Checking if "ha-351012-m03" exists ...
	I1122 00:04:12.196544   59962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-351012-m03
	I1122 00:04:12.215705   59962 host.go:66] Checking if "ha-351012-m03" exists ...
	I1122 00:04:12.216009   59962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:04:12.216055   59962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-351012-m03
	I1122 00:04:12.233537   59962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/ha-351012-m03/id_rsa Username:docker}
	I1122 00:04:12.337061   59962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:04:12.350605   59962 kubeconfig.go:125] found "ha-351012" server: "https://192.168.49.254:8443"
	I1122 00:04:12.350645   59962 api_server.go:166] Checking apiserver status ...
	I1122 00:04:12.350692   59962 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:04:12.364237   59962 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1341/cgroup
	I1122 00:04:12.372731   59962 api_server.go:182] apiserver freezer: "6:freezer:/docker/229e60969ecabf7813a5dfb5bbd6daa70def3086b4bee32ea63bea8be4e54b27/kubepods/burstable/podd8487caf9db30c19de43141a81564d0f/c3a9c373558c318db33a985f804f7140b115afbdff6b470b7c27d0b0f3ac9bd2"
	I1122 00:04:12.372899   59962 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/229e60969ecabf7813a5dfb5bbd6daa70def3086b4bee32ea63bea8be4e54b27/kubepods/burstable/podd8487caf9db30c19de43141a81564d0f/c3a9c373558c318db33a985f804f7140b115afbdff6b470b7c27d0b0f3ac9bd2/freezer.state
	I1122 00:04:12.380631   59962 api_server.go:204] freezer state: "THAWED"
	I1122 00:04:12.380664   59962 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1122 00:04:12.389437   59962 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1122 00:04:12.389485   59962 status.go:463] ha-351012-m03 apiserver status = Running (err=<nil>)
	I1122 00:04:12.389509   59962 status.go:176] ha-351012-m03 status: &{Name:ha-351012-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:04:12.389554   59962 status.go:174] checking status of ha-351012-m04 ...
	I1122 00:04:12.389883   59962 cli_runner.go:164] Run: docker container inspect ha-351012-m04 --format={{.State.Status}}
	I1122 00:04:12.412714   59962 status.go:371] ha-351012-m04 host status = "Running" (err=<nil>)
	I1122 00:04:12.412741   59962 host.go:66] Checking if "ha-351012-m04" exists ...
	I1122 00:04:12.413060   59962 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-351012-m04
	I1122 00:04:12.430372   59962 host.go:66] Checking if "ha-351012-m04" exists ...
	I1122 00:04:12.430739   59962 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:04:12.430795   59962 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-351012-m04
	I1122 00:04:12.448725   59962 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32803 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/ha-351012-m04/id_rsa Username:docker}
	I1122 00:04:12.557900   59962 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:04:12.572486   59962 status.go:176] ha-351012-m04 status: &{Name:ha-351012-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.96s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.82s)

TestMultiControlPlane/serial/RestartSecondaryNode (13.83s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 node start m02 --alsologtostderr -v 5: (11.918620964s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5: (1.702768442s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.83s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.5s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.499401269s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.50s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.62s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 stop --alsologtostderr -v 5
E1122 00:04:40.927008    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 stop --alsologtostderr -v 5: (37.718478892s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 start --wait true --alsologtostderr -v 5
E1122 00:05:58.854451    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:06:02.849041    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 start --wait true --alsologtostderr -v 5: (1m0.717065949s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (98.62s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.24s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 node delete m03 --alsologtostderr -v 5: (10.272033524s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.24s)
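The readiness probe on the line above walks every node's conditions with a go-template. For reference, an equivalent jsonpath form (an illustrative sketch, not a command the suite actually runs) would be:

	kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'

Either query prints one True/False per node, which is what the test asserts against.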

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.81s)

TestMultiControlPlane/serial/StopCluster (36.46s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 stop --alsologtostderr -v 5: (36.346715518s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5: exit status 7 (110.273811ms)

-- stdout --
	ha-351012
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351012-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-351012-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1122 00:06:55.780318   74793 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:06:55.780518   74793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:06:55.780546   74793 out.go:374] Setting ErrFile to fd 2...
	I1122 00:06:55.780564   74793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:06:55.781479   74793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:06:55.781690   74793 out.go:368] Setting JSON to false
	I1122 00:06:55.781725   74793 mustload.go:66] Loading cluster: ha-351012
	I1122 00:06:55.781758   74793 notify.go:221] Checking for updates...
	I1122 00:06:55.782144   74793 config.go:182] Loaded profile config "ha-351012": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:06:55.782163   74793 status.go:174] checking status of ha-351012 ...
	I1122 00:06:55.782988   74793 cli_runner.go:164] Run: docker container inspect ha-351012 --format={{.State.Status}}
	I1122 00:06:55.800907   74793 status.go:371] ha-351012 host status = "Stopped" (err=<nil>)
	I1122 00:06:55.800929   74793 status.go:384] host is not running, skipping remaining checks
	I1122 00:06:55.800936   74793 status.go:176] ha-351012 status: &{Name:ha-351012 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:06:55.800962   74793 status.go:174] checking status of ha-351012-m02 ...
	I1122 00:06:55.801264   74793 cli_runner.go:164] Run: docker container inspect ha-351012-m02 --format={{.State.Status}}
	I1122 00:06:55.820722   74793 status.go:371] ha-351012-m02 host status = "Stopped" (err=<nil>)
	I1122 00:06:55.820757   74793 status.go:384] host is not running, skipping remaining checks
	I1122 00:06:55.820764   74793 status.go:176] ha-351012-m02 status: &{Name:ha-351012-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:06:55.820784   74793 status.go:174] checking status of ha-351012-m04 ...
	I1122 00:06:55.821070   74793 cli_runner.go:164] Run: docker container inspect ha-351012-m04 --format={{.State.Status}}
	I1122 00:06:55.842542   74793 status.go:371] ha-351012-m04 host status = "Stopped" (err=<nil>)
	I1122 00:06:55.842566   74793 status.go:384] host is not running, skipping remaining checks
	I1122 00:06:55.842574   74793 status.go:176] ha-351012-m04 status: &{Name:ha-351012-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (36.46s)
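Note on the run above: with every node stopped, minikube status reports state through a non-zero exit (7 in this run), so a wrapper script can branch on the code instead of parsing the table. A minimal sketch, assuming the same profile name:

	out/minikube-linux-arm64 -p ha-351012 status >/dev/null 2>&1; echo "status exit code: $?"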

TestMultiControlPlane/serial/RestartCluster (60.45s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=containerd: (59.440485718s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (60.45s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.80s)

TestMultiControlPlane/serial/AddSecondaryNode (92.77s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 node add --control-plane --alsologtostderr -v 5
E1122 00:08:18.987673    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:08:46.691068    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 node add --control-plane --alsologtostderr -v 5: (1m31.649589374s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-351012 status --alsologtostderr -v 5: (1.115543395s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (92.77s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.18s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.183844658s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.18s)

TestJSONOutput/start/Command (82.41s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-298277 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd
E1122 00:10:58.859721    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-298277 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=containerd: (1m22.390546075s)
--- PASS: TestJSONOutput/start/Command (82.41s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.74s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-298277 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.74s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-298277 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (1.45s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-298277 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-298277 --output=json --user=testUser: (1.451334247s)
--- PASS: TestJSONOutput/stop/Command (1.45s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-294457 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-294457 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (99.744805ms)

-- stdout --
	{"specversion":"1.0","id":"09799c50-9eb1-4b3d-a5f5-b0981b56a0e9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-294457] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"965e106b-02d1-429c-82db-327f92dac7fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21934"}}
	{"specversion":"1.0","id":"c6d67ee4-40c2-4eec-9d3c-f98d8ed98b47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"f2275950-5555-45cf-b7c0-fe0e5a645a4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig"}}
	{"specversion":"1.0","id":"9fef1b15-7788-4b9c-a299-f75858f64891","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube"}}
	{"specversion":"1.0","id":"965394e7-876b-402a-a687-46aba138b083","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"28393474-9772-4361-bce1-b29e6555db0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"60e9eb56-ce79-49bf-a37b-98f4ac88c4c0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-294457" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-294457
--- PASS: TestErrorJSONOutput (0.24s)
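Each line of the --output=json stream above is a CloudEvents-style envelope (specversion 1.0) with the payload under .data. A minimal way to surface just the error events, assuming jq is available on the host (illustrative only, not part of the test):

	out/minikube-linux-arm64 start -p json-output-error-294457 --output=json --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'

Against the run above this would print DRV_UNSUPPORTED_OS: The driver 'fail' is not supported on linux/arm64.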

TestKicCustomNetwork/create_custom_network (46.99s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-531476 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-531476 --network=: (44.673787689s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-531476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-531476
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-531476: (2.29182765s)
--- PASS: TestKicCustomNetwork/create_custom_network (46.99s)

TestKicCustomNetwork/use_default_bridge_network (33.81s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-104994 --network=bridge
E1122 00:12:21.920422    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-104994 --network=bridge: (31.720869943s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-104994" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-104994
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-104994: (2.067292598s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (33.81s)

TestKicExistingNetwork (32.91s)

=== RUN   TestKicExistingNetwork
I1122 00:12:31.515805    5623 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1122 00:12:31.532109    5623 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1122 00:12:31.532187    5623 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1122 00:12:31.532206    5623 cli_runner.go:164] Run: docker network inspect existing-network
W1122 00:12:31.547340    5623 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1122 00:12:31.547370    5623 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1122 00:12:31.547383    5623 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1122 00:12:31.547482    5623 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1122 00:12:31.565156    5623 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-cc891483483f IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:aa:f5:f5:5e:a2:12} reservation:<nil>}
I1122 00:12:31.565485    5623 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001befd60}
I1122 00:12:31.565515    5623 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1122 00:12:31.565565    5623 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1122 00:12:31.622760    5623 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-792016 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-792016 --network=existing-network: (30.590320985s)
helpers_test.go:175: Cleaning up "existing-network-792016" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-792016
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-792016: (2.176230987s)
I1122 00:13:04.406631    5623 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.91s)
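The network-create call in this test tags the network with labels (created_by.minikube.sigs.k8s.io=true and name.minikube.sigs.k8s.io=existing-network, visible in the command above), so leftover minikube-created networks can be listed with a plain docker label filter. A sketch, not part of the suite:

	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true --format '{{.Name}}'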

TestKicCustomSubnet (34.27s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-779590 --subnet=192.168.60.0/24
E1122 00:13:18.987987    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-779590 --subnet=192.168.60.0/24: (31.99938968s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-779590 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-779590" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-779590
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-779590: (2.247139434s)
--- PASS: TestKicCustomSubnet (34.27s)

TestKicStaticIP (39.21s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-922335 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-922335 --static-ip=192.168.200.200: (36.849131627s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-922335 ip
helpers_test.go:175: Cleaning up "static-ip-922335" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-922335
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-922335: (2.196870445s)
--- PASS: TestKicStaticIP (39.21s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.44s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-175960 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-175960 --driver=docker  --container-runtime=containerd: (30.187039352s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-178447 --driver=docker  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-178447 --driver=docker  --container-runtime=containerd: (34.607152002s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-175960
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-178447
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-178447" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-178447
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-178447: (2.142613579s)
helpers_test.go:175: Cleaning up "first-175960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-175960
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-175960: (2.020602375s)
--- PASS: TestMinikubeProfile (70.44s)

TestMountStart/serial/StartWithMountFirst (8.16s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-432288 --memory=3072 --mount-string /tmp/TestMountStartserial2050764685/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-432288 --memory=3072 --mount-string /tmp/TestMountStartserial2050764685/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.161629373s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.16s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-432288 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-434413 --memory=3072 --mount-string /tmp/TestMountStartserial2050764685/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-434413 --memory=3072 --mount-string /tmp/TestMountStartserial2050764685/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=containerd: (7.630694675s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.63s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-434413 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-432288 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-432288 --alsologtostderr -v=5: (1.701231104s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.29s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-434413 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.29s)

TestMountStart/serial/Stop (1.31s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-434413
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-434413: (1.307162792s)
--- PASS: TestMountStart/serial/Stop (1.31s)

TestMountStart/serial/RestartStopped (7.39s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-434413
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-434413: (6.390608927s)
--- PASS: TestMountStart/serial/RestartStopped (7.39s)

TestMountStart/serial/VerifyMountPostStop (0.28s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-434413 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.28s)

TestMultiNode/serial/FreshStart2Nodes (135.52s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-539016 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
E1122 00:15:58.854794    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-539016 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m14.988276208s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (135.52s)

TestMultiNode/serial/DeployApp2Nodes (5.37s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-539016 -- rollout status deployment/busybox: (3.527339912s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- exec busybox-7b57f96db7-5hfpk -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- exec busybox-7b57f96db7-zzn6t -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- exec busybox-7b57f96db7-5hfpk -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- exec busybox-7b57f96db7-zzn6t -- nslookup kubernetes.default
E1122 00:18:18.987481    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- exec busybox-7b57f96db7-5hfpk -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- exec busybox-7b57f96db7-zzn6t -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.37s)

TestMultiNode/serial/PingHostFrom2Pods (0.98s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- exec busybox-7b57f96db7-5hfpk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- exec busybox-7b57f96db7-5hfpk -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- exec busybox-7b57f96db7-zzn6t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-539016 -- exec busybox-7b57f96db7-zzn6t -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.98s)

TestMultiNode/serial/AddNode (58.26s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-539016 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-539016 -v=5 --alsologtostderr: (57.549572986s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (58.26s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-539016 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

TestMultiNode/serial/CopyFile (10.47s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp testdata/cp-test.txt multinode-539016:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp multinode-539016:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile431602474/001/cp-test_multinode-539016.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp multinode-539016:/home/docker/cp-test.txt multinode-539016-m02:/home/docker/cp-test_multinode-539016_multinode-539016-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m02 "sudo cat /home/docker/cp-test_multinode-539016_multinode-539016-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp multinode-539016:/home/docker/cp-test.txt multinode-539016-m03:/home/docker/cp-test_multinode-539016_multinode-539016-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m03 "sudo cat /home/docker/cp-test_multinode-539016_multinode-539016-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp testdata/cp-test.txt multinode-539016-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp multinode-539016-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile431602474/001/cp-test_multinode-539016-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp multinode-539016-m02:/home/docker/cp-test.txt multinode-539016:/home/docker/cp-test_multinode-539016-m02_multinode-539016.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016 "sudo cat /home/docker/cp-test_multinode-539016-m02_multinode-539016.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp multinode-539016-m02:/home/docker/cp-test.txt multinode-539016-m03:/home/docker/cp-test_multinode-539016-m02_multinode-539016-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m03 "sudo cat /home/docker/cp-test_multinode-539016-m02_multinode-539016-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp testdata/cp-test.txt multinode-539016-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp multinode-539016-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile431602474/001/cp-test_multinode-539016-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp multinode-539016-m03:/home/docker/cp-test.txt multinode-539016:/home/docker/cp-test_multinode-539016-m03_multinode-539016.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016 "sudo cat /home/docker/cp-test_multinode-539016-m03_multinode-539016.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 cp multinode-539016-m03:/home/docker/cp-test.txt multinode-539016-m02:/home/docker/cp-test_multinode-539016-m03_multinode-539016-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 ssh -n multinode-539016-m02 "sudo cat /home/docker/cp-test_multinode-539016-m03_multinode-539016-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.47s)

TestMultiNode/serial/StopNode (2.47s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-539016 node stop m03: (1.356833177s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-539016 status: exit status 7 (554.87396ms)

-- stdout --
	multinode-539016
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-539016-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-539016-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-539016 status --alsologtostderr: exit status 7 (558.335861ms)

-- stdout --
	multinode-539016
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-539016-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-539016-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1122 00:19:31.937075  127532 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:19:31.937357  127532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:19:31.937379  127532 out.go:374] Setting ErrFile to fd 2...
	I1122 00:19:31.937386  127532 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:19:31.938149  127532 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:19:31.938409  127532 out.go:368] Setting JSON to false
	I1122 00:19:31.938456  127532 mustload.go:66] Loading cluster: multinode-539016
	I1122 00:19:31.939074  127532 config.go:182] Loaded profile config "multinode-539016": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:19:31.939095  127532 status.go:174] checking status of multinode-539016 ...
	I1122 00:19:31.939123  127532 notify.go:221] Checking for updates...
	I1122 00:19:31.940251  127532 cli_runner.go:164] Run: docker container inspect multinode-539016 --format={{.State.Status}}
	I1122 00:19:31.961849  127532 status.go:371] multinode-539016 host status = "Running" (err=<nil>)
	I1122 00:19:31.961876  127532 host.go:66] Checking if "multinode-539016" exists ...
	I1122 00:19:31.962193  127532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-539016
	I1122 00:19:31.995139  127532 host.go:66] Checking if "multinode-539016" exists ...
	I1122 00:19:31.995927  127532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:19:31.995999  127532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-539016
	I1122 00:19:32.017275  127532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/multinode-539016/id_rsa Username:docker}
	I1122 00:19:32.117214  127532 ssh_runner.go:195] Run: systemctl --version
	I1122 00:19:32.124071  127532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:19:32.138178  127532 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:19:32.198293  127532 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:49 OomKillDisable:true NGoroutines:62 SystemTime:2025-11-22 00:19:32.18788419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:19:32.198852  127532 kubeconfig.go:125] found "multinode-539016" server: "https://192.168.67.2:8443"
	I1122 00:19:32.198889  127532 api_server.go:166] Checking apiserver status ...
	I1122 00:19:32.198936  127532 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1122 00:19:32.211526  127532 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1395/cgroup
	I1122 00:19:32.221464  127532 api_server.go:182] apiserver freezer: "6:freezer:/docker/99ead1acd5ba228a277ab9e82af511af57372a9a5450594b798729eb24d8524b/kubepods/burstable/pod3c460881e7614ffc5f20e416abf2f985/576997bdff89dcda64390950ce06b545273857ce244dbb3e821f1e765dbd9770"
	I1122 00:19:32.221550  127532 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/99ead1acd5ba228a277ab9e82af511af57372a9a5450594b798729eb24d8524b/kubepods/burstable/pod3c460881e7614ffc5f20e416abf2f985/576997bdff89dcda64390950ce06b545273857ce244dbb3e821f1e765dbd9770/freezer.state
	I1122 00:19:32.230490  127532 api_server.go:204] freezer state: "THAWED"
	I1122 00:19:32.230520  127532 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1122 00:19:32.240326  127532 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1122 00:19:32.240361  127532 status.go:463] multinode-539016 apiserver status = Running (err=<nil>)
	I1122 00:19:32.240372  127532 status.go:176] multinode-539016 status: &{Name:multinode-539016 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:19:32.240389  127532 status.go:174] checking status of multinode-539016-m02 ...
	I1122 00:19:32.240720  127532 cli_runner.go:164] Run: docker container inspect multinode-539016-m02 --format={{.State.Status}}
	I1122 00:19:32.258552  127532 status.go:371] multinode-539016-m02 host status = "Running" (err=<nil>)
	I1122 00:19:32.258575  127532 host.go:66] Checking if "multinode-539016-m02" exists ...
	I1122 00:19:32.258877  127532 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-539016-m02
	I1122 00:19:32.276802  127532 host.go:66] Checking if "multinode-539016-m02" exists ...
	I1122 00:19:32.277146  127532 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1122 00:19:32.277199  127532 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-539016-m02
	I1122 00:19:32.294527  127532 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21934-2332/.minikube/machines/multinode-539016-m02/id_rsa Username:docker}
	I1122 00:19:32.392750  127532 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1122 00:19:32.405824  127532 status.go:176] multinode-539016-m02 status: &{Name:multinode-539016-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:19:32.405861  127532 status.go:174] checking status of multinode-539016-m03 ...
	I1122 00:19:32.406211  127532 cli_runner.go:164] Run: docker container inspect multinode-539016-m03 --format={{.State.Status}}
	I1122 00:19:32.425285  127532 status.go:371] multinode-539016-m03 host status = "Stopped" (err=<nil>)
	I1122 00:19:32.425314  127532 status.go:384] host is not running, skipping remaining checks
	I1122 00:19:32.425321  127532 status.go:176] multinode-539016-m03 status: &{Name:multinode-539016-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.47s)
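
The status probe in the stderr block above is worth unpacking: minikube locates the kube-apiserver process, resolves its freezer cgroup, confirms the cgroup is "THAWED" (i.e. the pod is not paused), and only then queries /healthz. A minimal Go sketch of that flow, assuming the cgroup-v1 freezer layout seen in the log; the placeholder path and the insecure TLS client are illustrative, not minikube's actual code:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os"
	"strings"
)

// freezerState reads a cgroup-v1 freezer.state file; "THAWED" means the
// processes in the cgroup are running (a paused pod would report "FROZEN").
func freezerState(cgroupDir string) (string, error) {
	b, err := os.ReadFile(cgroupDir + "/freezer.state")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(b)), nil
}

// apiserverHealthy mirrors the "Checking apiserver healthz" step: a plain
// GET against /healthz, expecting HTTP 200.
func apiserverHealthy(endpoint string) (bool, error) {
	// The cluster uses self-signed certs; skipping verification is a
	// sketch-level shortcut only.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK, nil
}

func main() {
	// Path shape taken from the "apiserver freezer:" log line; the IDs
	// here are placeholders, not values to fill in from this report.
	state, err := freezerState("/sys/fs/cgroup/freezer/docker/<container-id>/kubepods/burstable/<pod-uid>/<ctr-id>")
	if err == nil && state == "THAWED" {
		ok, _ := apiserverHealthy("https://192.168.67.2:8443")
		fmt.Println("apiserver healthy:", ok)
	}
}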

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-539016 node start m03 -v=5 --alsologtostderr: (6.972508671s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.81s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (74.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-539016
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-539016
E1122 00:19:42.052794    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-539016: (25.22676447s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-539016 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-539016 --wait=true -v=5 --alsologtostderr: (49.189107422s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-539016
--- PASS: TestMultiNode/serial/RestartKeepsNodes (74.54s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.92s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 node delete m03
E1122 00:20:58.854303    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-539016 node delete m03: (4.997011627s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.92s)
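
The last kubectl invocation above uses a Go template to reduce each node to its Ready condition. The same template can be exercised locally with text/template over decoded JSON, since the lowercase keys resolve against maps rather than structs; the sample document below is a trimmed stand-in for `kubectl get nodes -o json`:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Same template string the test feeds to kubectl -o go-template.
	const ready = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Minimal stand-in for a NodeList; real output carries many more fields.
	raw := []byte(`{"items":[{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`)

	var doc map[string]any
	if err := json.Unmarshal(raw, &doc); err != nil {
		panic(err)
	}
	// Executing against maps lets the lowercase JSON keys (.items, .status)
	// resolve, which is why kubectl's go-template mode works this way.
	tmpl := template.Must(template.New("ready").Parse(ready))
	if err := tmpl.Execute(os.Stdout, doc); err != nil {
		panic(err)
	}
}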

                                                
                                    
TestMultiNode/serial/StopMultiNode (24.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-539016 stop: (23.927145732s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-539016 status: exit status 7 (96.230424ms)

                                                
                                                
-- stdout --
	multinode-539016
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-539016-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-539016 status --alsologtostderr: exit status 7 (91.871617ms)

                                                
                                                
-- stdout --
	multinode-539016
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-539016-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1122 00:21:24.762380  136288 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:21:24.762495  136288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:21:24.762505  136288 out.go:374] Setting ErrFile to fd 2...
	I1122 00:21:24.762510  136288 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:21:24.762777  136288 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:21:24.762956  136288 out.go:368] Setting JSON to false
	I1122 00:21:24.762982  136288 mustload.go:66] Loading cluster: multinode-539016
	I1122 00:21:24.763371  136288 config.go:182] Loaded profile config "multinode-539016": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:21:24.763381  136288 status.go:174] checking status of multinode-539016 ...
	I1122 00:21:24.763652  136288 notify.go:221] Checking for updates...
	I1122 00:21:24.763973  136288 cli_runner.go:164] Run: docker container inspect multinode-539016 --format={{.State.Status}}
	I1122 00:21:24.782599  136288 status.go:371] multinode-539016 host status = "Stopped" (err=<nil>)
	I1122 00:21:24.782621  136288 status.go:384] host is not running, skipping remaining checks
	I1122 00:21:24.782628  136288 status.go:176] multinode-539016 status: &{Name:multinode-539016 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1122 00:21:24.782659  136288 status.go:174] checking status of multinode-539016-m02 ...
	I1122 00:21:24.782957  136288 cli_runner.go:164] Run: docker container inspect multinode-539016-m02 --format={{.State.Status}}
	I1122 00:21:24.806586  136288 status.go:371] multinode-539016-m02 host status = "Stopped" (err=<nil>)
	I1122 00:21:24.806615  136288 status.go:384] host is not running, skipping remaining checks
	I1122 00:21:24.806629  136288 status.go:176] multinode-539016-m02 status: &{Name:multinode-539016-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (24.12s)
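
Both node checks in the stderr above reduce to a single docker CLI call each: docker container inspect --format={{.State.Status}}, whose output drives the Host/Kubelet/APIServer fields in the printed status struct. A sketch of that probe via os/exec; the state-to-status mapping in minikube itself covers more cases than noted here:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerState asks the docker CLI for a container's state, exactly as
// the cli_runner lines in the log do.
func containerState(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	for _, node := range []string{"multinode-539016", "multinode-539016-m02"} {
		state, err := containerState(node)
		if err != nil {
			fmt.Printf("%s: inspect failed: %v\n", node, err)
			continue
		}
		// "running" maps to the report's "Running"; "exited" and similar
		// states map to "Stopped", after which remaining checks are skipped.
		fmt.Printf("%s: docker state=%q\n", node, state)
	}
}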

                                                
                                    
TestMultiNode/serial/RestartMultiNode (57.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-539016 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-539016 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=containerd: (57.208712987s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-539016 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (57.90s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (36.02s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-539016
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-539016-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-539016-m02 --driver=docker  --container-runtime=containerd: exit status 14 (104.184126ms)

                                                
                                                
-- stdout --
	* [multinode-539016-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-539016-m02' is duplicated with machine name 'multinode-539016-m02' in profile 'multinode-539016'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-539016-m03 --driver=docker  --container-runtime=containerd
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-539016-m03 --driver=docker  --container-runtime=containerd: (33.462499962s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-539016
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-539016: exit status 80 (325.014399ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-539016 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-539016-m03 already exists in multinode-539016-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-539016-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-539016-m03: (2.070553469s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.02s)
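
The MK_USAGE failure above comes from a uniqueness check: a new profile name must not collide with an existing profile or with any machine inside one (multi-node machines are named <profile>, <profile>-m02, and so on). A hedged sketch of such a check; the helper and data shape are invented for illustration, not minikube's internals:

package main

import "fmt"

// profile pairs a profile name with the machine names it owns, e.g.
// "multinode-539016" owning "multinode-539016" and "multinode-539016-m02".
type profile struct {
	Name     string
	Machines []string
}

// nameAvailable reports whether candidate collides with an existing
// profile name or any machine name inside one.
func nameAvailable(candidate string, profiles []profile) error {
	for _, p := range profiles {
		if p.Name == candidate {
			return fmt.Errorf("profile name %q already exists", candidate)
		}
		for _, m := range p.Machines {
			if m == candidate {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", candidate, m, p.Name)
			}
		}
	}
	return nil
}

func main() {
	existing := []profile{{Name: "multinode-539016",
		Machines: []string{"multinode-539016", "multinode-539016-m02"}}}
	fmt.Println(nameAvailable("multinode-539016-m02", existing)) // collides, as in the test
	fmt.Println(nameAvailable("multinode-539016-m03", existing)) // free after the earlier DeleteNode
}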

                                                
                                    
TestPreload (118.61s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-491018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0
E1122 00:23:18.988139    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-491018 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.32.0: (58.26494855s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-491018 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-491018 image pull gcr.io/k8s-minikube/busybox: (2.191687275s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-491018
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-491018: (5.914052711s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-491018 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-491018 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd: (49.53369618s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-491018 image list
helpers_test.go:175: Cleaning up "test-preload-491018" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-491018
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-491018: (2.455742315s)
--- PASS: TestPreload (118.61s)

                                                
                                    
TestScheduledStopUnix (107.65s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-672388 --memory=3072 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-672388 --memory=3072 --driver=docker  --container-runtime=containerd: (31.366151082s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-672388 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:25:33.160363  152120 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:25:33.160614  152120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:25:33.160653  152120 out.go:374] Setting ErrFile to fd 2...
	I1122 00:25:33.160674  152120 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:25:33.161012  152120 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:25:33.161334  152120 out.go:368] Setting JSON to false
	I1122 00:25:33.161507  152120 mustload.go:66] Loading cluster: scheduled-stop-672388
	I1122 00:25:33.161946  152120 config.go:182] Loaded profile config "scheduled-stop-672388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:25:33.162063  152120 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/config.json ...
	I1122 00:25:33.162298  152120 mustload.go:66] Loading cluster: scheduled-stop-672388
	I1122 00:25:33.162479  152120 config.go:182] Loaded profile config "scheduled-stop-672388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-672388 -n scheduled-stop-672388
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-672388 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:25:33.628233  152208 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:25:33.628439  152208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:25:33.628468  152208 out.go:374] Setting ErrFile to fd 2...
	I1122 00:25:33.628486  152208 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:25:33.628965  152208 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:25:33.629353  152208 out.go:368] Setting JSON to false
	I1122 00:25:33.629676  152208 daemonize_unix.go:73] killing process 152135 as it is an old scheduled stop
	I1122 00:25:33.629781  152208 mustload.go:66] Loading cluster: scheduled-stop-672388
	I1122 00:25:33.630430  152208 config.go:182] Loaded profile config "scheduled-stop-672388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:25:33.630519  152208 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/config.json ...
	I1122 00:25:33.630696  152208 mustload.go:66] Loading cluster: scheduled-stop-672388
	I1122 00:25:33.630835  152208 config.go:182] Loaded profile config "scheduled-stop-672388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1122 00:25:33.635326    5623 retry.go:31] will retry after 82.736µs: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.636153    5623 retry.go:31] will retry after 145.896µs: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.637288    5623 retry.go:31] will retry after 169.148µs: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.638408    5623 retry.go:31] will retry after 449.255µs: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.639525    5623 retry.go:31] will retry after 399.594µs: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.640645    5623 retry.go:31] will retry after 708.158µs: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.641765    5623 retry.go:31] will retry after 1.286791ms: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.643958    5623 retry.go:31] will retry after 1.066095ms: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.646125    5623 retry.go:31] will retry after 3.527339ms: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.650271    5623 retry.go:31] will retry after 5.611746ms: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.656839    5623 retry.go:31] will retry after 4.825805ms: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.661772    5623 retry.go:31] will retry after 11.76328ms: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.674325    5623 retry.go:31] will retry after 9.850833ms: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.684558    5623 retry.go:31] will retry after 14.499013ms: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.699813    5623 retry.go:31] will retry after 29.75339ms: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
I1122 00:25:33.731436    5623 retry.go:31] will retry after 44.8757ms: open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-672388 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-672388 -n scheduled-stop-672388
E1122 00:25:58.855701    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-672388
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-672388 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1122 00:25:59.572172  152697 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:25:59.572322  152697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:25:59.572347  152697 out.go:374] Setting ErrFile to fd 2...
	I1122 00:25:59.572353  152697 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:25:59.572629  152697 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:25:59.572920  152697 out.go:368] Setting JSON to false
	I1122 00:25:59.573075  152697 mustload.go:66] Loading cluster: scheduled-stop-672388
	I1122 00:25:59.573455  152697 config.go:182] Loaded profile config "scheduled-stop-672388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:25:59.573545  152697 profile.go:143] Saving config to /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/config.json ...
	I1122 00:25:59.573753  152697 mustload.go:66] Loading cluster: scheduled-stop-672388
	I1122 00:25:59.573913  152697 config.go:182] Loaded profile config "scheduled-stop-672388": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-672388
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-672388: exit status 7 (69.979233ms)

                                                
                                                
-- stdout --
	scheduled-stop-672388
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-672388 -n scheduled-stop-672388
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-672388 -n scheduled-stop-672388: exit status 7 (72.117469ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-672388" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-672388
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-672388: (4.648718418s)
--- PASS: TestScheduledStopUnix (107.65s)
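
The retry.go lines in the middle of this test poll the profile's pid file with delays that roughly double, with jitter, from 82µs up to ~45ms. A minimal sketch of that wait-for-file pattern, assuming jittered exponential backoff; minikube's retry package may use a different policy:

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// waitForFile polls until path exists, sleeping with jittered exponential
// backoff between attempts, like the "will retry after ..." log lines.
func waitForFile(path string, maxWait time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		// Jitter: sleep somewhere in [0.5*delay, 1.5*delay), then double.
		jittered := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		time.Sleep(jittered)
		delay *= 2
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	err := waitForFile("/home/jenkins/minikube-integration/21934-2332/.minikube/profiles/scheduled-stop-672388/pid", 5*time.Second)
	fmt.Println(err)
}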

                                                
                                    
TestInsufficientStorage (13.32s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-682744 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-682744 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (10.621638314s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c56a8e96-ad24-42f2-8edf-36a0448025c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-682744] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c908951-7752-4c0f-8b9b-d2cd5caffc75","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21934"}}
	{"specversion":"1.0","id":"5da58c92-ebb5-44f0-b3f5-e5b3f13753b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"feede39a-597a-4099-915d-a8eb6c2f3138","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig"}}
	{"specversion":"1.0","id":"6ef10fba-5e75-4fbf-a04b-9b31b6cb6f35","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube"}}
	{"specversion":"1.0","id":"533d5fa2-986c-4973-a598-eccbf948db81","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"79698a62-9fd2-44dd-838c-bd41302bf775","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0042fa3d-e6a5-40c9-8133-5e1f0bedcabc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"ce2e8cbd-8e9e-4054-90a6-37eb05db7333","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"59d3e1a4-b690-4906-b56d-72626b70c38c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"d3a7ade1-02ef-46e9-9eda-47a6168543e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"6588ea01-6332-484c-acf0-620584e7ba31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-682744\" primary control-plane node in \"insufficient-storage-682744\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"f30bfbf6-c3be-42e4-ba2c-39d7f32541fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763588073-21934 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"97605766-ffa9-440a-94ed-de5e4fe820f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"8b24e398-d6ac-4654-9ec9-db99cf720067","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-682744 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-682744 --output=json --layout=cluster: exit status 7 (422.033861ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-682744","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-682744","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 00:27:00.402611  154514 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-682744" does not appear in /home/jenkins/minikube-integration/21934-2332/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-682744 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-682744 --output=json --layout=cluster: exit status 7 (312.580235ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-682744","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-682744","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1122 00:27:00.719149  154580 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-682744" does not appear in /home/jenkins/minikube-integration/21934-2332/kubeconfig
	E1122 00:27:00.729551  154580 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/insufficient-storage-682744/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-682744" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-682744
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-682744: (1.96271132s)
--- PASS: TestInsufficientStorage (13.32s)
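
With --output=json, minikube emits one CloudEvents-style JSON object per line: specversion 1.0, a type such as io.k8s.sigs.minikube.error, and a data payload of strings carrying exitcode, message, and advice. A sketch of a consumer that picks out the storage failure above; the struct is modeled on the JSON in this log, not on an official client:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event models the per-line JSON emitted by "minikube start --output=json".
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from minikube start --output=json
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip any non-JSON noise
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			// The insufficient-storage run ends with exitcode "26"
			// (RSRC_DOCKER_STORAGE) plus human-readable advice.
			fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
		}
	}
}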

                                                
                                    
TestRunningBinaryUpgrade (69.31s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
E1122 00:30:58.855201    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3664697751 start -p running-upgrade-632974 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3664697751 start -p running-upgrade-632974 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.421451873s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-632974 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-632974 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (26.675439198s)
helpers_test.go:175: Cleaning up "running-upgrade-632974" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-632974
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-632974: (2.132159599s)
--- PASS: TestRunningBinaryUpgrade (69.31s)

                                                
                                    
TestKubernetesUpgrade (352.33s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E1122 00:29:01.921746    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.329784293s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-381698
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-381698: (1.334093046s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-381698 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-381698 status --format={{.Host}}: exit status 7 (66.259463ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m50.369219965s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-381698 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 106 (164.48927ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-381698] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-381698
	    minikube start -p kubernetes-upgrade-381698 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-3816982 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-381698 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-381698 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (17.589084924s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-381698" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-381698
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-381698: (2.342869819s)
--- PASS: TestKubernetesUpgrade (352.33s)
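
The downgrade refusal above (exit 106, K8S_DOWNGRADE_UNSUPPORTED) is at heart an ordering check between the requested and the deployed Kubernetes versions. A sketch using golang.org/x/mod/semver, which handles the v-prefixed versions in the log; minikube's own check lives elsewhere and includes more validation:

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	existing, requested := "v1.34.1", "v1.28.0"
	// semver.Compare returns -1, 0, or 1; a requested version older than
	// the running cluster is rejected rather than silently downgraded.
	if semver.Compare(requested, existing) < 0 {
		fmt.Printf("refusing to downgrade %s cluster to %s\n", existing, requested)
	}
}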

                                                
                                    
TestMissingContainerUpgrade (168.21s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1944377649 start -p missing-upgrade-720642 --memory=3072 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1944377649 start -p missing-upgrade-720642 --memory=3072 --driver=docker  --container-runtime=containerd: (1m14.206565388s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-720642
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-720642
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-720642 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-720642 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m22.868857018s)
helpers_test.go:175: Cleaning up "missing-upgrade-720642" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-720642
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-720642: (1.932597338s)
--- PASS: TestMissingContainerUpgrade (168.21s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-117900 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-117900 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=containerd: exit status 14 (95.626656ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-117900] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (40.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-117900 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-117900 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (40.323667391s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-117900 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.74s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (24.76s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-117900 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-117900 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (22.142305087s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-117900 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-117900 status -o json: exit status 2 (395.277604ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-117900","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-117900
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-117900: (2.224273986s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (24.76s)

                                                
                                    
TestNoKubernetes/serial/Start (9.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-117900 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-117900 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (9.287635982s)
--- PASS: TestNoKubernetes/serial/Start (9.29s)

                                                
                                    
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21934-2332/.minikube/cache/linux/arm64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-117900 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-117900 "sudo systemctl is-active --quiet service kubelet": exit status 1 (443.807405ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.44s)
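
This test passes precisely because the command fails: the remote systemctl is-active --quiet exits non-zero when the unit is not active (the "status 3" in the stderr above), and that surfaces as the non-zero ssh exit the test expects. A sketch of interpreting the same exit code locally:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the exit code alone carries the answer.
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &ee):
		// A non-zero exit (3 in the log above) means the unit is not
		// active, which is exactly what this check wants to see.
		fmt.Printf("kubelet not active (exit %d)\n", ee.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}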

                                                
                                    
TestNoKubernetes/serial/ProfileList (3.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-arm64 profile list
E1122 00:28:18.988210    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-arm64 profile list: (2.955674619s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (3.51s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-117900
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-117900: (1.403822517s)
--- PASS: TestNoKubernetes/serial/Stop (1.40s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (6.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-117900 --driver=docker  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-117900 --driver=docker  --container-runtime=containerd: (6.602876244s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (6.60s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-117900 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-117900 "sudo systemctl is-active --quiet service kubelet": exit status 1 (267.79449ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.27s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (8s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (8.00s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (52.85s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3777275213 start -p stopped-upgrade-139064 --memory=3072 --vm-driver=docker  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3777275213 start -p stopped-upgrade-139064 --memory=3072 --vm-driver=docker  --container-runtime=containerd: (32.41292857s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3777275213 -p stopped-upgrade-139064 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3777275213 -p stopped-upgrade-139064 stop: (1.26918701s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-139064 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-139064 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (19.163719159s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (52.85s)
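
A minimal sketch of the upgrade flow exercised above: provision with an old release, stop it, then restart the same profile with the binary under test. The binary paths below are placeholders standing in for whatever the harness downloads.

// Sketch: stopped-binary upgrade. The new binary must adopt the
// stopped cluster without recreating it.
package main

import (
	"log"
	"os/exec"
)

func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	const profile = "stopped-upgrade-139064"
	oldBin := "/tmp/minikube-v1.32.0" // placeholder path to the old release
	newBin := "out/minikube-linux-arm64"

	run(oldBin, "start", "-p", profile, "--memory=3072", "--vm-driver=docker", "--container-runtime=containerd")
	run(oldBin, "-p", profile, "stop")
	run(newBin, "start", "-p", profile, "--memory=3072", "--driver=docker", "--container-runtime=containerd")
}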

TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-139064
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-139064: (1.472254374s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

TestPause/serial/Start (52.93s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-180349 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-180349 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (52.926152175s)
--- PASS: TestPause/serial/Start (52.93s)

TestPause/serial/SecondStartNoReconfiguration (6.5s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-180349 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-180349 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (6.489486792s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.50s)

TestPause/serial/Pause (0.71s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-180349 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.71s)

TestPause/serial/VerifyStatus (0.34s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-180349 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-180349 --output=json --layout=cluster: exit status 2 (341.542487ms)

-- stdout --
	{"Name":"pause-180349","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-180349","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
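
The StatusCode values above (418 Paused, 405 Stopped, 200 OK) come straight from the JSON payload. A minimal Go sketch for decoding it, using only fields visible in that payload; note the command itself exits 2 while the cluster is paused, so stdout has to be read despite the non-zero exit.

// Sketch: decode `minikube status --output=json --layout=cluster`.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type component struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	// Exit status 2 is expected while paused; keep whatever stdout we got.
	out, _ := exec.Command("minikube", "status", "-p", "pause-180349",
		"--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, c.Name, c.StatusName)
		}
	}
}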

TestPause/serial/Unpause (0.61s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-180349 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.61s)

TestPause/serial/PauseAgain (0.91s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-180349 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.91s)

TestPause/serial/DeletePaused (2.97s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-180349 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-180349 --alsologtostderr -v=5: (2.970800536s)
--- PASS: TestPause/serial/DeletePaused (2.97s)

TestPause/serial/VerifyDeletedResources (0.45s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-180349
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-180349: exit status 1 (21.327338ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-180349: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)
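
A minimal sketch of the volume check above: after delete, `docker volume inspect` on the profile name should fail with "no such volume", which is taken as proof the volume is gone.

// Sketch: confirm a profile's Docker volume was removed.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "volume", "inspect", "pause-180349").CombinedOutput()
	if err == nil {
		log.Fatalf("volume still exists:\n%s", out)
	}
	if strings.Contains(string(out), "no such volume") {
		fmt.Println("volume removed, as expected")
		return
	}
	log.Fatalf("unexpected inspect failure:\n%s", out)
}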

TestNetworkPlugins/group/false (5.66s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-482944 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-482944 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=containerd: exit status 14 (259.548129ms)

-- stdout --
	* [false-482944] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21934
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1122 00:33:52.147384  193425 out.go:360] Setting OutFile to fd 1 ...
	I1122 00:33:52.147552  193425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:52.147614  193425 out.go:374] Setting ErrFile to fd 2...
	I1122 00:33:52.147635  193425 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1122 00:33:52.147887  193425 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21934-2332/.minikube/bin
	I1122 00:33:52.148315  193425 out.go:368] Setting JSON to false
	I1122 00:33:52.149189  193425 start.go:133] hostinfo: {"hostname":"ip-172-31-31-251","uptime":4570,"bootTime":1763767063,"procs":171,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1122 00:33:52.149289  193425 start.go:143] virtualization:  
	I1122 00:33:52.153366  193425 out.go:179] * [false-482944] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I1122 00:33:52.157549  193425 out.go:179]   - MINIKUBE_LOCATION=21934
	I1122 00:33:52.157572  193425 notify.go:221] Checking for updates...
	I1122 00:33:52.166849  193425 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1122 00:33:52.171229  193425 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21934-2332/kubeconfig
	I1122 00:33:52.174740  193425 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21934-2332/.minikube
	I1122 00:33:52.177982  193425 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1122 00:33:52.180877  193425 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1122 00:33:52.184280  193425 config.go:182] Loaded profile config "kubernetes-upgrade-381698": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
	I1122 00:33:52.184392  193425 driver.go:422] Setting default libvirt URI to qemu:///system
	I1122 00:33:52.210040  193425 docker.go:124] docker version: linux-28.1.1:Docker Engine - Community
	I1122 00:33:52.210162  193425 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1122 00:33:52.333755  193425 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:true NGoroutines:52 SystemTime:2025-11-22 00:33:52.298050228 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I1122 00:33:52.333855  193425 docker.go:319] overlay module found
	I1122 00:33:52.339144  193425 out.go:179] * Using the docker driver based on user configuration
	I1122 00:33:52.342067  193425 start.go:309] selected driver: docker
	I1122 00:33:52.342090  193425 start.go:930] validating driver "docker" against <nil>
	I1122 00:33:52.342103  193425 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1122 00:33:52.345833  193425 out.go:203] 
	W1122 00:33:52.349083  193425 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1122 00:33:52.352317  193425 out.go:203] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-482944 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-482944

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-482944

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-482944

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-482944

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-482944

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-482944

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-482944

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-482944

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-482944

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-482944

>>> host: /etc/nsswitch.conf:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: /etc/hosts:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: /etc/resolv.conf:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-482944

>>> host: crictl pods:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: crictl containers:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> k8s: describe netcat deployment:
error: context "false-482944" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-482944" does not exist

>>> k8s: netcat logs:
error: context "false-482944" does not exist

>>> k8s: describe coredns deployment:
error: context "false-482944" does not exist

>>> k8s: describe coredns pods:
error: context "false-482944" does not exist

>>> k8s: coredns logs:
error: context "false-482944" does not exist

>>> k8s: describe api server pod(s):
error: context "false-482944" does not exist

>>> k8s: api server logs:
error: context "false-482944" does not exist

>>> host: /etc/cni:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: ip a s:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: ip r s:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: iptables-save:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: iptables table nat:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> k8s: describe kube-proxy daemon set:
error: context "false-482944" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-482944" does not exist

>>> k8s: kube-proxy logs:
error: context "false-482944" does not exist

>>> host: kubelet daemon status:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: kubelet daemon config:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> k8s: kubelet logs:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-381698
contexts:
- context:
    cluster: kubernetes-upgrade-381698
    user: kubernetes-upgrade-381698
  name: kubernetes-upgrade-381698
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-381698
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/kubernetes-upgrade-381698/client.crt
    client-key: /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/kubernetes-upgrade-381698/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-482944

>>> host: docker daemon status:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: docker daemon config:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: /etc/docker/daemon.json:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: docker system info:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: cri-docker daemon status:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: cri-docker daemon config:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: cri-dockerd version:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: containerd daemon status:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: containerd daemon config:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: /etc/containerd/config.toml:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: containerd config dump:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: crio daemon status:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: crio daemon config:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: /etc/crio:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

>>> host: crio config:
* Profile "false-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-482944"

----------------------- debugLogs end: false-482944 [took: 5.194852897s] --------------------------------
helpers_test.go:175: Cleaning up "false-482944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-482944
--- PASS: TestNetworkPlugins/group/false (5.66s)
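
A minimal sketch of this negative test, grounded in the output above: containerd requires a CNI, so the start must fail with usage exit code 14 and the "requires CNI" message.

// Sketch: expect `minikube start --cni=false` with containerd to be
// rejected with MK_USAGE (exit status 14).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "false-482944",
		"--cni=false", "--driver=docker", "--container-runtime=containerd")
	out, err := cmd.CombinedOutput()
	exitErr, ok := err.(*exec.ExitError)
	if !ok || exitErr.ExitCode() != 14 {
		log.Fatalf("expected usage error (exit 14), got %v\n%s", err, out)
	}
	if !strings.Contains(string(out), `The "containerd" container runtime requires CNI`) {
		log.Fatalf("missing CNI error message:\n%s", out)
	}
	fmt.Println("start correctly rejected --cni=false with containerd")
}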

TestStartStop/group/old-k8s-version/serial/FirstStart (59.58s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
E1122 00:35:58.854499    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:36:22.054511    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (59.583440122s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (59.58s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-187160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-187160 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.100241453s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-187160 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-187160 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-187160 --alsologtostderr -v=3: (12.147859328s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.15s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-187160 -n old-k8s-version-187160
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-187160 -n old-k8s-version-187160: exit status 7 (70.697512ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-187160 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
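
A minimal sketch of the tolerant status probe above ("may be ok"): on a stopped cluster `minikube status` exits non-zero (7 here) while still printing Stopped, so the stdout value is what gets asserted, not the exit code.

// Sketch: read host state from a stopped cluster despite the non-zero
// exit status of `minikube status`.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("minikube", "status", "--format={{.Host}}",
		"-p", "old-k8s-version-187160")
	out, err := cmd.Output() // stdout survives the non-zero exit
	if err != nil {
		if exitErr, ok := err.(*exec.ExitError); ok {
			fmt.Printf("status exited %d (may be ok while stopped)\n", exitErr.ExitCode())
		} else {
			log.Fatalf("could not run status: %v", err)
		}
	}
	if strings.TrimSpace(string(out)) != "Stopped" {
		log.Fatalf("unexpected host state: %q", out)
	}
	fmt.Println("host is Stopped; safe to enable addons before restart")
}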

TestStartStop/group/old-k8s-version/serial/SecondStart (28.69s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-187160 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.28.0: (28.148589782s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-187160 -n old-k8s-version-187160
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (28.69s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jj4sd" [59a18b9f-6e31-424d-ba0d-a259c5b33cc6] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jj4sd" [59a18b9f-6e31-424d-ba0d-a259c5b33cc6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.00423266s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)
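
A minimal sketch of the wait performed above, simplified from the test helpers: poll kubectl until a pod matching the label reports phase Running, up to the same 9m0s budget.

// Sketch: wait for a labeled pod to reach phase Running.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(9 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-187160",
			"get", "pods", "-n", "kubernetes-dashboard",
			"-l", "k8s-app=kubernetes-dashboard",
			"-o", "jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			fmt.Println("dashboard pod is Running")
			return
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for k8s-app=kubernetes-dashboard")
}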

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-jj4sd" [59a18b9f-6e31-424d-ba0d-a259c5b33cc6] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003626966s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-187160 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-187160 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/old-k8s-version/serial/Pause (3.15s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-187160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-187160 -n old-k8s-version-187160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-187160 -n old-k8s-version-187160: exit status 2 (339.730835ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-187160 -n old-k8s-version-187160
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-187160 -n old-k8s-version-187160: exit status 2 (349.094722ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-187160 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-187160 -n old-k8s-version-187160
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-187160 -n old-k8s-version-187160
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.15s)
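
A minimal sketch of the pause round-trip above, assuming minikube on PATH: pause, confirm APIServer=Paused and Kubelet=Stopped (status exits 2 in that state, so the exit code is ignored when reading the field), then unpause.

// Sketch: pause/verify/unpause cycle for a profile.
package main

import (
	"log"
	"os/exec"
	"strings"
)

func status(profile, field string) string {
	// status exits 2 while paused; keep stdout regardless.
	out, _ := exec.Command("minikube", "status",
		"--format={{."+field+"}}", "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	const profile = "old-k8s-version-187160"
	if err := exec.Command("minikube", "pause", "-p", profile).Run(); err != nil {
		log.Fatalf("pause: %v", err)
	}
	if status(profile, "APIServer") != "Paused" || status(profile, "Kubelet") != "Stopped" {
		log.Fatal("profile did not reach the paused state")
	}
	if err := exec.Command("minikube", "unpause", "-p", profile).Run(); err != nil {
		log.Fatalf("unpause: %v", err)
	}
}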

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m32.136027066s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (92.14s)

TestStartStop/group/embed-certs/serial/FirstStart (82.41s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-540723 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1122 00:38:18.987778    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-540723 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m22.413147644s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (82.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-080784 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-080784 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.07s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-080784 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-080784 --alsologtostderr -v=3: (12.361335831s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.36s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784: exit status 7 (78.627775ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-080784 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.89s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-080784 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (58.467281649s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (58.89s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.57s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-540723 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-540723 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.389683529s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-540723 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.57s)

TestStartStop/group/embed-certs/serial/Stop (12.95s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-540723 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-540723 --alsologtostderr -v=3: (12.951029995s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.95s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-540723 -n embed-certs-540723
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-540723 -n embed-certs-540723: exit status 7 (80.339203ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-540723 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/embed-certs/serial/SecondStart (54.4s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-540723 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-540723 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (53.968378183s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-540723 -n embed-certs-540723
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (54.40s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vw54c" [fe93edb1-ce75-42a7-b0eb-200ad010db77] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003573655s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-vw54c" [fe93edb1-ce75-42a7-b0eb-200ad010db77] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003665755s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-080784 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-080784 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)
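
The two non-minikube images flagged above can be picked out of the JSON listing without depending on its field layout; a minimal sketch (the grep pattern is simply the two image names reported above):

  out/minikube-linux-arm64 -p default-k8s-diff-port-080784 image list --format=json \
    | grep -oE 'kindest/kindnetd[^"]*|gcr.io/k8s-minikube/busybox[^"]*'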

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.18s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-080784 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784: exit status 2 (333.685118ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784: exit status 2 (332.831125ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-080784 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-080784 -n default-k8s-diff-port-080784
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.18s)
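
The pause round-trip above can be replayed by hand. As the run shows, status exits with code 2 while components are paused (the test treats that as acceptable), so the sketch tolerates the non-zero exits:

  out/minikube-linux-arm64 pause -p default-k8s-diff-port-080784
  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-080784 || true  # prints Paused, exit 2
  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-080784 || true    # prints Stopped, exit 2
  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-080784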

TestStartStop/group/no-preload/serial/FirstStart (73.91s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-734654 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-734654 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (1m13.912798191s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (73.91s)
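
The distinguishing flag here is --preload=false, which skips minikube's preloaded image tarball so container images are pulled individually on first start; that is consistent with this FirstStart running longer than the other profiles. A sketch with a hypothetical profile name demo-nopreload, other flags as in the run above:

  out/minikube-linux-arm64 start -p demo-nopreload --memory=3072 --preload=false \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1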

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m5cbn" [0e3001d0-7596-4a8f-856c-47243e4d4605] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004005103s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-m5cbn" [0e3001d0-7596-4a8f-856c-47243e4d4605] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00315915s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-540723 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.12s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-540723 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.35s)

TestStartStop/group/embed-certs/serial/Pause (4.05s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-540723 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p embed-certs-540723 --alsologtostderr -v=1: (1.064890741s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-540723 -n embed-certs-540723
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-540723 -n embed-certs-540723: exit status 2 (423.974579ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-540723 -n embed-certs-540723
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-540723 -n embed-certs-540723: exit status 2 (373.695931ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-540723 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-540723 -n embed-certs-540723
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-540723 -n embed-certs-540723
--- PASS: TestStartStop/group/embed-certs/serial/Pause (4.05s)

TestStartStop/group/newest-cni/serial/FirstStart (43.06s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-953404 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1122 00:41:25.263685    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:25.270020    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:25.281369    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:25.302724    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:25.344799    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:25.426159    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:25.587598    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:25.909200    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:26.551231    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:27.833239    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:30.394978    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:35.516267    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:41:45.757581    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:42:06.239520    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-953404 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (43.061227886s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.06s)
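
The CNI profile's flag combination is worth isolating: --network-plugin=cni plus an --extra-config override handing kubeadm the pod network CIDR. A sketch with a hypothetical profile name demo-cni, flags otherwise verbatim from the run above:

  out/minikube-linux-arm64 start -p demo-cni --memory=3072 \
    --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
    --driver=docker --container-runtime=containerd --kubernetes-version=v1.34.1

(The repeated cert_rotation errors interleaved above appear to stem from a leftover client-cert watcher for the already-deleted old-k8s-version-187160 profile; they did not affect this test, which passed.)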

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-953404 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.00s)

TestStartStop/group/newest-cni/serial/Stop (1.37s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-953404 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-953404 --alsologtostderr -v=3: (1.371112624s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.37s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-953404 -n newest-cni-953404
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-953404 -n newest-cni-953404: exit status 7 (71.835902ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-953404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)
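
As this run shows, addons can be enabled while the profile is stopped: status reports the host as Stopped with exit status 7 (accepted by the test), and addons enable still succeeds. The same two commands, by hand:

  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-953404 || true  # prints Stopped, exit 7
  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-953404 --images=MetricsScraper=registry.k8s.io/echoserver:1.4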

TestStartStop/group/newest-cni/serial/SecondStart (20.27s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-953404 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-953404 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (19.727329156s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-953404 -n newest-cni-953404
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (20.27s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-953404 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.46s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-734654 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-734654 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.339492395s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-734654 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.46s)
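
Here --images and --registries point the metrics-server addon at a stand-in image on a fake registry, and the follow-up describe is the verification step. The same two commands, replayed by hand:

  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-734654 \
    --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
  kubectl --context no-preload-734654 describe deploy/metrics-server -n kube-system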

TestStartStop/group/newest-cni/serial/Pause (3.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-953404 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-953404 -n newest-cni-953404
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-953404 -n newest-cni-953404: exit status 2 (561.392588ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-953404 -n newest-cni-953404
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-953404 -n newest-cni-953404: exit status 2 (485.700931ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-953404 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-953404 -n newest-cni-953404
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-953404 -n newest-cni-953404
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.88s)

TestStartStop/group/no-preload/serial/Stop (12.65s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-734654 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-734654 --alsologtostderr -v=3: (12.645965339s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.65s)

TestNetworkPlugins/group/auto/Start (84.59s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=containerd: (1m24.586176493s)
--- PASS: TestNetworkPlugins/group/auto/Start (84.59s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-734654 -n no-preload-734654
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-734654 -n no-preload-734654: exit status 7 (70.19788ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-734654 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/no-preload/serial/SecondStart (58.02s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-734654 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1
E1122 00:42:47.201536    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:43:18.987445    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-734654 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.34.1: (57.643626441s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-734654 -n no-preload-734654
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.02s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lhl8d" [bf2866eb-7ba7-4ee3-a35d-d8fd1ba9897c] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00405015s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-lhl8d" [bf2866eb-7ba7-4ee3-a35d-d8fd1ba9897c] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003900671s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-734654 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-734654 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/no-preload/serial/Pause (3.1s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-734654 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-734654 -n no-preload-734654
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-734654 -n no-preload-734654: exit status 2 (370.888731ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-734654 -n no-preload-734654
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-734654 -n no-preload-734654: exit status 2 (356.205698ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-734654 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-734654 -n no-preload-734654
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-734654 -n no-preload-734654
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.10s)

TestNetworkPlugins/group/kindnet/Start (83.67s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=containerd: (1m23.666146s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (83.67s)

TestNetworkPlugins/group/auto/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-482944 "pgrep -a kubelet"
I1122 00:44:00.977496    5623 config.go:182] Loaded profile config "auto-482944": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.39s)
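
The KubeletFlags check is a single ssh round-trip: pgrep -a prints each matching process with its full command line, so the kubelet flags can be read straight off the output. By hand:

  out/minikube-linux-arm64 ssh -p auto-482944 "pgrep -a kubelet"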

TestNetworkPlugins/group/auto/NetCatPod (11.42s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-482944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-9lnmn" [3ee3f41b-30d6-44a1-899a-236156ee10be] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-9lnmn" [3ee3f41b-30d6-44a1-899a-236156ee10be] Running
E1122 00:44:09.123213    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003900239s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.42s)

TestNetworkPlugins/group/auto/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-482944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.24s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.20s)
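
Taken together, the DNS, Localhost, and HairPin subtests probe name resolution, loopback reachability, and hairpin traffic back through the pod's own service. The three probes, replayed against the same deployment:

  kubectl --context auto-482944 exec deployment/netcat -- nslookup kubernetes.default
  kubectl --context auto-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  kubectl --context auto-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"  # service name, i.e. hairpin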

TestNetworkPlugins/group/calico/Start (71.32s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd
E1122 00:44:55.199446    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=containerd: (1m11.319012442s)
--- PASS: TestNetworkPlugins/group/calico/Start (71.32s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-cnf2s" [5a66ce7b-6c15-4c96-81c6-698f1ba0f08b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003766401s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-482944 "pgrep -a kubelet"
I1122 00:45:29.616802    5623 config.go:182] Loaded profile config "kindnet-482944": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.40s)

TestNetworkPlugins/group/kindnet/NetCatPod (10.42s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-482944 replace --force -f testdata/netcat-deployment.yaml
I1122 00:45:30.030124    5623 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cm4jd" [f2fdc74b-1992-4d36-b7a0-0a62e1cf68b4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cm4jd" [f2fdc74b-1992-4d36-b7a0-0a62e1cf68b4] Running
E1122 00:45:36.160749    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004778841s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.42s)

TestNetworkPlugins/group/kindnet/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-482944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.24s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9b4ks" [a5291089-0390-4e07-86c1-1cec18336247] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005718061s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-482944 "pgrep -a kubelet"
I1122 00:45:57.307297    5623 config.go:182] Loaded profile config "calico-482944": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.46s)

TestNetworkPlugins/group/calico/NetCatPod (11.39s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-482944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-67ks6" [82bc5bc3-0cb5-476f-b058-99535e187939] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1122 00:45:58.856656    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/addons-336804/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-67ks6" [82bc5bc3-0cb5-476f-b058-99535e187939] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004451536s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.39s)

TestNetworkPlugins/group/custom-flannel/Start (68.7s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=containerd: (1m8.699789794s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (68.70s)
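
Unlike the named plugins elsewhere in this group, --cni is handed a manifest file here (testdata/kube-flannel.yaml), so any CNI deployable from a single YAML can be exercised the same way. A sketch with a hypothetical profile name and manifest path:

  out/minikube-linux-arm64 start -p demo-custom-cni --memory=3072 \
    --cni=./my-cni.yaml --driver=docker --container-runtime=containerd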

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-482944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/enable-default-cni/Start (51.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E1122 00:46:52.964687    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/old-k8s-version-187160/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:46:58.082991    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/default-k8s-diff-port-080784/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:47:13.163993    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:47:13.170466    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:47:13.181878    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:47:13.203451    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:47:13.245008    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:47:13.326969    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:47:13.488415    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (51.276077183s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (51.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-482944 "pgrep -a kubelet"
E1122 00:47:13.810612    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1122 00:47:13.940168    5623 config.go:182] Loaded profile config "custom-flannel-482944": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.35s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-482944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wzcbc" [cfa68a87-dde5-4c01-8ff3-302271c053bc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1122 00:47:14.452215    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:47:15.734506    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:47:18.296181    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-wzcbc" [cfa68a87-dde5-4c01-8ff3-302271c053bc] Running
E1122 00:47:23.417962    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004198902s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-482944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.19s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-482944 "pgrep -a kubelet"
I1122 00:47:27.614561    5623 config.go:182] Loaded profile config "enable-default-cni-482944": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-482944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-bcpv8" [4438bc67-e35c-4d49-a8c3-3b58aa443743] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-bcpv8" [4438bc67-e35c-4d49-a8c3-3b58aa443743] Running
E1122 00:47:33.659971    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004507599s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.28s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-482944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.25s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.24s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (67.62s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd
E1122 00:47:54.142690    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=containerd: (1m7.615561417s)
--- PASS: TestNetworkPlugins/group/flannel/Start (67.62s)

TestNetworkPlugins/group/bridge/Start (87.03s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd
E1122 00:48:18.987547    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/functional-656006/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:48:35.104297    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/no-preload-734654/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-482944 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=containerd: (1m27.025942828s)
--- PASS: TestNetworkPlugins/group/bridge/Start (87.03s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-t5r9j" [36cb8bf8-d634-427a-8b7b-3ce64d4f7775] Running
E1122 00:49:01.376088    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:49:01.382534    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:49:01.393999    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:49:01.415472    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:49:01.456959    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:49:01.538559    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:49:01.700046    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:49:02.021570    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004896128s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-482944 "pgrep -a kubelet"
I1122 00:49:02.375285    5623 config.go:182] Loaded profile config "flannel-482944": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.30s)

TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-482944 replace --force -f testdata/netcat-deployment.yaml
E1122 00:49:02.663613    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-x5vgn" [75021765-34aa-4b19-909a-d78030dc46a5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1122 00:49:03.944937    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1122 00:49:06.506753    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-x5vgn" [75021765-34aa-4b19-909a-d78030dc46a5] Running
E1122 00:49:11.629237    5623 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/auto-482944/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003640744s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.32s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-482944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-482944 "pgrep -a kubelet"
I1122 00:49:29.275685    5623 config.go:182] Loaded profile config "bridge-482944": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.41s)

TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-482944 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-c2cdj" [91b07753-1db0-44ae-b208-21f2bdbe4b30] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-c2cdj" [91b07753-1db0-44ae-b208-21f2bdbe4b30] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 11.004214694s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (11.37s)

TestNetworkPlugins/group/bridge/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-482944 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.20s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-482944 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (30/333)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestDownloadOnlyKic (0.43s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-803352 --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:248: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-803352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-803352
--- SKIP: TestDownloadOnlyKic (0.43s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-577767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-577767
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

TestNetworkPlugins/group/kubenet (4.37s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the containerd container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-482944 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-482944

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-482944

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-482944

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-482944

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-482944

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-482944

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-482944

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-482944

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-482944

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-482944

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: /etc/hosts:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: /etc/resolv.conf:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-482944

>>> host: crictl pods:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: crictl containers:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> k8s: describe netcat deployment:
error: context "kubenet-482944" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-482944" does not exist

>>> k8s: netcat logs:
error: context "kubenet-482944" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-482944" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-482944" does not exist

>>> k8s: coredns logs:
error: context "kubenet-482944" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-482944" does not exist

>>> k8s: api server logs:
error: context "kubenet-482944" does not exist

>>> host: /etc/cni:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: ip a s:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: ip r s:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: iptables-save:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: iptables table nat:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-482944" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-482944" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-482944" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: kubelet daemon config:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> k8s: kubelet logs:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-381698
contexts:
- context:
    cluster: kubernetes-upgrade-381698
    user: kubernetes-upgrade-381698
  name: kubernetes-upgrade-381698
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-381698
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/kubernetes-upgrade-381698/client.crt
    client-key: /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/kubernetes-upgrade-381698/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-482944

>>> host: docker daemon status:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: docker daemon config:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: docker system info:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: cri-docker daemon status:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: cri-docker daemon config:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: cri-dockerd version:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: containerd daemon status:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: containerd daemon config:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: containerd config dump:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: crio daemon status:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: crio daemon config:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: /etc/crio:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

>>> host: crio config:
* Profile "kubenet-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-482944"

----------------------- debugLogs end: kubenet-482944 [took: 4.210336175s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-482944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-482944
--- SKIP: TestNetworkPlugins/group/kubenet (4.37s)

TestNetworkPlugins/group/cilium (5.94s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-482944 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-482944

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-482944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-482944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-482944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-482944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-482944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-482944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-482944" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-482944" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-482944

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-482944

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-482944" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-482944" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-482944

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-482944

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-482944" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-482944" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-482944" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-482944" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-482944" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: kubelet daemon config:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> k8s: kubelet logs:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21934-2332/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 22 Nov 2025 00:29:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-381698
contexts:
- context:
    cluster: kubernetes-upgrade-381698
    user: kubernetes-upgrade-381698
  name: kubernetes-upgrade-381698
current-context: ""
kind: Config
preferences: {}
users:
- name: kubernetes-upgrade-381698
  user:
    client-certificate: /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/kubernetes-upgrade-381698/client.crt
    client-key: /home/jenkins/minikube-integration/21934-2332/.minikube/profiles/kubernetes-upgrade-381698/client.key
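Note: the dump above has current-context set to "" and defines no cilium-482944 entry, which is consistent with every kubectl probe in this log failing with 'context "cilium-482944" does not exist'. The following is not part of the test run, just a minimal sketch of how one might confirm this against the same kubeconfig (context names taken from the dump above):

	# list the contexts actually defined; only kubernetes-upgrade-381698 is present
	kubectl config get-contexts
	# selecting the missing context reproduces the failure mode seen throughout this log
	kubectl config use-context cilium-482944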

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-482944

>>> host: docker daemon status:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: docker daemon config:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: docker system info:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: cri-docker daemon status:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: cri-docker daemon config:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: cri-dockerd version:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: containerd daemon status:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: containerd daemon config:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: containerd config dump:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: crio daemon status:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: crio daemon config:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: /etc/crio:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

>>> host: crio config:
* Profile "cilium-482944" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-482944"

----------------------- debugLogs end: cilium-482944 [took: 5.69171526s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-482944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-482944
--- SKIP: TestNetworkPlugins/group/cilium (5.94s)
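Note: the cleanup step above can be verified locally. A minimal sketch, assuming the same out/minikube-linux-arm64 binary used throughout this log (profile name taken from the log above):

	# remove the leftover profile, then confirm it no longer appears
	out/minikube-linux-arm64 delete -p cilium-482944
	out/minikube-linux-arm64 profile list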